00:05:20 you should build a TM using a cassette tape for magnetic storage, then feed it with one of those "infinite" Möbius strip tapes
00:06:24 that would be more complex, because that tape is analog
00:06:37 yeah, I guess
00:08:24 counter + SRAM won't be bad, they could run up to 66 MHz (or higher, because in BF the address would be set with the previous command)
00:10:54 Braintwist would be a possibility: Load the program from IO and swap tapes. Maybe with a hack that uses a ROM for init
00:11:35 -!- comex has quit ("Caught sigterm, terminating...").
00:13:32 and ethernet as IO :D
00:16:11 -!- olsner has quit ("Leaving").
00:22:06 -!- jix has quit ("CommandQ").
00:27:33 -!- tusho has joined.
00:42:19 oc2k1: Welcome to the land of Esolangs.
00:42:30 :d
00:52:49 hm?
01:02:49 -!- tusho has quit.
01:05:02 -!- Sgeo has joined.
01:12:47 -!- oerjan has quit ("leaving").
21:50:58 -!- clog has joined.
21:52:00 -!- jix has joined.
21:54:02 -!- oklopol has quit (Read error: 60 (Operation timed out)).
21:59:26 -!- caio has quit (Remote closed the connection).
22:01:51 -!- oklofok has joined.
22:02:51 the trick is we're basically doing that garbage collection technique where you mark things you can see; it's just here we do it every time something is deleted
22:05:43 basically, every object has this nice little number attached to it, and there's another, global number, which gets incremented every time a reference is deleted; when a reference is deleted, the things it references are checked to see if their number is the global number == they've been disref'd this cycle, if it's not the same, have their number inc'd, and refs dec'd, and the reference deletion propagates to whatever they reference, if the refcoun
22:06:12 s/have their number inc'd/set their number to the global number/
22:06:39 ais523: interpretation of oklofok's idea pls
22:06:43 i think this is as fast as refcounting, but errorlessss
22:07:34 oklofok: plz provide impl
22:07:36 how to appear smart: take a simple idea, and explain it confusingly using all kinds of terms invented on the fly
22:07:39 oklofok: what if you have a and b both referencing x, x and y referencing each other?
22:07:42 but yeah i'll try
22:07:51 ais523: dunno, i'll think
22:08:04 and then, say, a is deleted?
22:08:18 what if, instead, x is deleted?
22:08:52 a.ref = x, b.ref = x, y.ref = x, x.ref = y (now we have x.torefs=2) delete a => x.torefs=1, and execution stops
22:09:15 a.ref = x, b.ref = x, y.ref = x, x.ref = y (now we have x.torefs=2) delete x => y.torefs=0 => x and y are deleted
22:09:29 of course, you cannot delete x
22:09:50 hmmhmm
22:10:13 you can only delete local variables, basically, or rather, they are the only things that actually ever are explicitly deleted, although automatically
22:10:22 err
22:10:45 well actually what do i care, x shouldn't be deleted, ever, because x.torefs != 0
22:11:08 ais523: satisfied, disagreeing or confused?
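The scheme oklofok is groping toward here is very close to the standard "trial deletion" local mark-scan used for cyclic reference counting: when a deleted reference doesn't drop a count to zero, provisionally decrement the whole reachable closure, restore counts for anything still externally referenced, and reclaim what stayed at zero. A minimal Python sketch under that interpretation — names like `count`, `color`, and `FREED` are illustrative, not from the discussion:

```python
class Obj:
    """A heap object with outgoing references and an incoming refcount."""
    def __init__(self, name):
        self.name = name
        self.refs = []        # objects this one references
        self.count = 0        # how many references point at this one
        self.color = "black"

FREED = set()  # names of reclaimed objects

def add_ref(src, dst):
    src.refs.append(dst)
    dst.count += 1

def del_ref(src, dst):
    """Delete the reference src -> dst, reclaiming anything that dies."""
    src.refs.remove(dst)
    dst.count -= 1
    if dst.count == 0:
        free(dst)
    else:
        mark_gray(dst)       # provisionally decrement the whole closure
        scan(dst)            # restore counts of externally reachable parts
        collect_white(dst)   # reclaim whatever stayed at zero (dead cycles)

def free(o):
    FREED.add(o.name)
    for c in list(o.refs):
        c.count -= 1
        if c.count == 0:
            free(c)
    o.refs = []

def mark_gray(o):
    if o.color != "gray":
        o.color = "gray"
        for c in o.refs:
            c.count -= 1
            mark_gray(c)

def scan(o):
    if o.color == "gray":
        if o.count > 0:
            scan_black(o)        # externally referenced: undo decrements
        else:
            o.color = "white"    # provisionally dead
            for c in o.refs:
                scan(c)

def scan_black(o):
    o.color = "black"
    for c in o.refs:
        c.count += 1
        if c.color != "black":
            scan_black(c)

def collect_white(o):
    if o.color == "white":
        o.color = "black"        # avoid revisiting
        for c in o.refs:
            collect_white(c)
        FREED.add(o.name)
        o.refs = []
```

On the a/b/x/y example from the log: deleting a's reference leaves x and y alive with their counts restored (x is still referenced by b and y), while deleting b's reference afterwards reclaims the whole x–y cycle — matching what the participants work out by hand.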
22:11:15 ah, I see, even if x goes out of scope a and b might still need it
22:11:25 whereas a can go out of scope fine
22:11:31 but why is x.torefs 2 not 3
22:11:33 yes, because nothing needs it
22:11:35 err
22:11:42 actually i guess it's 3...
22:12:01 this is exactly the problem i thought i solved, so i must have some idea
22:12:07 so wait a mo :)
22:12:43 right, the problem is we should also be propagating when the refcount does *not* become zero
22:12:52 that's the whole difference between just refcounting and this
22:13:36 so every time something is removed, we will effectively exhaust everything reachable from it, deccing the refcounts of the whole closure of references
22:14:10 we can probably optimize this quite a lot by having data structures share locally global refcounts or something
22:14:15 whatever that means :D
22:14:23 -!- comex has joined.
22:14:33 oklofok: "locally global" is a great term
22:14:38 yes!
22:14:51 also, it reminds me a bit of the way choicepoint stacks work in C-INTERCAL
22:14:58 the bottom of the stack is shared but the top isn't
22:15:22 ais523: so does it work?
22:15:30 and is it just about as efficient as refcounting?
22:15:38 if it had all the benefits of gc without actually...gcing, that would rock so hard
22:15:50 tusho: I'm not convinced it works yet
22:15:50 and oklofok could probably publish a paper on it :P
22:15:53 and I don't think it's faster
22:15:58 because you're doing the same amount of work
22:16:00 ais523: hm, so you still have to traverse shit
22:16:02 just at a different time
22:16:02 :(
22:16:17 the traversing happens at allocate/finalise, rather than at random intervals
22:16:20 actually, that might be faster
22:16:22 so, basically, for the strongly connected components reference graph, we have a shared refcount, methinks
22:16:30 *components of the reference
22:17:00 oklofok: is it better than gc for actual use would you say?
22:17:05 or mainly interesting but not useful
22:17:10 probably the latter
22:17:15 aw
22:17:24 though if it works, that is neat
22:17:33 and also reaffirms my theory that generally you should believe oklofok
22:17:40 even if you don't understand wtf he's talking about
22:17:41 oklofok's idea strikes me as the sort of thing that seems likely to have a way of working
22:17:56 although it also strikes me as the sort of thing that doesn't have all the details worked out yet
22:18:01 but it seems plausible as a general idea
22:18:07 ais523: that's all of oklofok's stuff.
22:18:16 wow, oklofok invented Feather
22:18:25 heh
22:18:28 (yes, yes, false syllogism I know)
22:18:40 details generally come when i open python... :P
22:20:23 oklofok: do so!
22:21:16 anyway, i think if something like that was well implemented, it would have cases where refcounting would work as fast as explicit memory deallocation, but that sometimes you would have it propagate deletion information around the variable graph all the time, and things would get n^2
22:21:38 and i cannot open python today, i need to read
22:21:39 -!- kar8nga has left (?).
22:22:24 so, when someone leaves the room, do you whois him to see whether he disliked the subject, or just has a habit of leaving channels before disconnecting?
22:22:25 oklofok: yes, it obviously has different order properties to standard collection
22:22:32 AFAICT it has a better best case and a much worse worst case
22:22:51 oklofok: no, I don't
22:22:52 'has a habit of leaving channels before disconnecting'
22:22:52 lol wut
22:23:05 tusho: it makes sense for people who like to be clean and tidy
22:23:13 but...
22:23:15 for instance I tend to empty the stack in Underload programs before they exit
22:23:16 tusho: it's not that rare.
22:23:18 your client already does that...
22:23:22 and close programs before shutting down the computer
22:23:30 and free() stuff in C just before the program ends
22:23:39 tusho: yes, that's probably the only reason for doing it; this doesn't really change anything
22:23:42 none of that is needed, in theory (in the third case not on a decent OS, at least)
22:23:49 ais523: i do none of those
22:23:51 esp. not closing programs
22:23:54 i have too many open
22:23:55 also
22:24:05 i advocate a data-based approach to computing
22:24:09 not document or program based
22:24:14 so it kind of comes naturally to me
22:24:31 -!- jix has quit (Nick collision from services.).
22:24:45 -!- jix has joined.
22:24:45 well, I like starting from a clean slate when I boot, and closing things down so I can deal with things that happen during the close down
22:24:59 although there usually aren't any
22:25:02 ah, you use a stupid os that reopens programs on startup.
22:25:04 my condolences
22:25:05 also it gives me a sort of wind-down
22:25:07 tusho: no, I don't
22:25:12 it can be set to do that
22:25:15 but I haven't set that setting
22:25:26 i just suspend this computer :q
22:25:48 (btw, I don't exit notification area applications like Akregator or Rhythmbox, even though I do start them by hand, some strange psychology is at work there...)
22:25:56 aaaanyway, this is kinda like connected components, you can do it for the whole graph in O(n) (the GC approach), or find the connected component of every node separately in O(n^2) (the refcounting thing); it's just the latter will actually be closer to O(n) if there are only a few references that cannot be "optimized", and you more easily can do it online
22:26:15 *can more easily
22:26:22 oklofok: well, most programs don't have circular references
22:26:24 or at least, not too many
22:26:35 which is more efficient if you only have a few circular references?
22:26:36 in that case it will be equal to refcounting.
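The "shared refcount per strongly connected component" / "locally global refcount" remark from earlier can be made concrete: condense the reference graph into its SCCs and keep one count per component, counting only references that cross a component boundary — a cycle then dies as a unit when its shared count hits zero. A small Python sketch of that idea, using Tarjan's SCC algorithm; all names are illustrative:

```python
def tarjan_sccs(graph):
    """Map each node of {node: [successors]} to an SCC id (Tarjan)."""
    index, low, comp = {}, {}, {}
    stack, on_stack = [], set()
    counter, cid = [0], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of an SCC
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp[w] = cid[0]
                if w == v:
                    break
            cid[0] += 1

    for v in graph:
        if v not in index:
            visit(v)
    return comp

def shared_refcounts(graph):
    """One 'locally global' refcount per SCC: only cross-component
    references are counted; references inside a cycle are free."""
    comp = tarjan_sccs(graph)
    counts = {c: 0 for c in comp.values()}
    for v, targets in graph.items():
        for w in targets:
            if comp[v] != comp[w]:
                counts[comp[w]] += 1
    return comp, counts
```

On the log's running example (a and b referencing x, x and y referencing each other), the x–y cycle collapses to one component whose shared count is 2 — exactly the two outside references — so plain refcounting on components suffices once the cycle is condensed away.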
22:26:42 as opposed to Circular Reference Exampleprogram Deluxe
22:26:46 oklofok: what, won't it handle them properly?
22:27:11 hmm
22:27:23 it will handle everything properly, that's not the issue
22:27:25 oklofok: actually I think refcounting will be slower as long as you don't have deep chains of objects
22:27:33 a linked list would really kill a system like yours
22:27:40 oh
22:27:40 hmm... refcounting doesn't like linked lists either, though
22:27:43 ais523: not necessarily
22:28:09 if done naively, then yes
22:28:28 you're planning to optimise this already?
22:28:36 :D
22:28:53 well i was thinking massive amounts of information about the subgraphs between variables
22:29:02 so you can't execute anything
22:29:12 because you'll just end up changing those graphs for ages
22:29:50 but i don't think i'm going to optimize anything at first
22:30:12 but, i think for recursive data structures, you can use a local global refcount
22:31:12 i'm sure i'll have something concrete tomorrow, if i have the time
22:32:06 i think, at least, this refcounting thing might be a useful abstraction for deducing deletion patterns for data structures
22:32:11 which would be kinda nice already
22:32:25 ooh, that'd be nice
22:34:21 for a recursive definition like L = | L, we see the child always has exactly one reference more than its parent, inside this one data structure, so we have the normal refcount for the nodes, and then just the info how deep they are in the structure, and what the refcount of the ultimate parent is
22:34:42 this shouldn't be list-specific, i'm sure it would work for any tree structure
22:34:58 of course, there would be more propagation... :P
22:35:16 * oklofok bangs head to wall
22:35:40 hmm... actually
22:35:57 we don't need to know what the exact refcount is for a node
22:36:11 we just need to know whether it has any refcounts from higher up in the data structure
22:36:23 so, we basically have an "ultimate parent" flag
22:36:36 which only propagates one level down when a parent is deleted
22:36:53 this way i think recursive structures can be optimized entirely
22:37:00 except there's a problem ais523 will point out to me
22:37:23 does it work on the sort of recursive structure that C-INTERCAL uses to hold threads?
22:37:27 that's pretty weird, and complex
22:37:38 an object can be part of any number of data structures, so it will have a list of these ultimate parent flags, and the refcount for other references
22:37:52 i have no idea, what kind are those?
22:38:15 i'm thinking more like something that can be proved to work in all cases
22:38:26 so, i'd say it would work on those
22:38:33 and if not, then i should get back to work
22:47:38 the language for testing, i guess just type definitions, Tree = Node a | Branch (.left Tree) (.right Tree), so you can explicitly give things a type, and then setting variables to things, and setting the references of variables, like {x.ref1 = y}, and {Tree x; x.left = y}, after which x.ref1 wouldn't work anymore, because x would be limited to being a tree
22:48:37 quite a useless language, but i don't think control would complicate the issue at all, at least without concurrency
22:48:45 do you think it would?
22:48:51 a
22:49:23 oklofok: well, the C-INTERCAL thread structure is at its base a singly linked ring
22:49:48 each member of the ring links to a singly linked list of other objects of the same type which aren't part of the ring (possibly empty)
22:50:10 and the tails of those lists can be shared (i.e. after a while the lists point to the same members)
22:50:25 fascinating
22:50:29 also each object has another pointer which can either point to an arbitrary object or to itself
22:50:37 pointing to itself is usual
22:50:57 finally, there's a sort of object that's just a placeholder and can only exist in the singly linked lists
22:51:10 and doesn't have any pointers except the ones that hold the list together
22:51:12 I think that's about it
22:51:25 i see, i see
22:51:36 (you may boggle at the structure of INTERCAL that it requires such a complicated data structure to represent its threading model, if you wish)
22:51:37 and how is this structure used exactly?
22:51:47 well, the ring contains active threads
22:51:51 which take turns executing
22:51:57 the single lists are choicepoint stacks
22:51:59 for each thread in the list
22:52:13 they're made out of dormant threads which aren't executing right now, but could be later
22:52:18 and stale threads which are just placeholders
22:52:29 also, any two threads can either share variables or not share variables
22:52:41 and so there's a pointer to the thread in which a thread's variables are actually stored
22:52:54 which is normally a pointer back to self, but not necessarily if there's sharing going on
22:53:34 when a thread becomes dormant, it takes the next thread, and attaches itself to that thread's linked list?
22:53:47 or what's the point of these lists
22:54:17 a user can create a dormant thread that's a copy of the current one
22:54:21 and push it onto the choicepoint stack
22:54:26 i see
22:54:34 a user can also replace the current thread with the top dormant one on its choicepoint stack
22:54:47 except that if the tail of that stack is shared with another, the thread is deleted instead
22:55:05 (N.B. this requires reference-counting to work correctly)
22:55:16 n.b.?
22:55:24 this makes it possible to backtrack past a fork()-equivalent
22:55:24 news bulletin?
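The thread structure ais523 describes — a singly linked ring of active threads, each with a singly linked choicepoint stack of dormant threads whose tails may be shared, plus a variable-owner pointer that is usually self — can be modelled roughly as follows. This is a guess at the shape from the description above, not C-INTERCAL's actual implementation; every field and function name is made up:

```python
class Thread:
    """One node of the (hypothetical) thread structure."""
    def __init__(self, stale=False):
        self.ring_next = None   # next active thread in the singly linked ring
        self.stack = None       # top of this thread's choicepoint stack
        self.below = None       # link downwards within a choicepoint stack
        self.owner = self       # thread where this thread's variables live
        self.stale = stale      # placeholder node: exists only inside stacks

def push_choicepoint(t):
    """Push a dormant copy of t onto t's own choicepoint stack."""
    dormant = Thread()
    dormant.owner = t.owner
    dormant.below = t.stack
    t.stack = dormant
    return dormant

def fork(t):
    """Splice a new active thread into the ring after t.

    The child's choicepoint stack is t's stack itself, so from here on
    the two stacks share a tail; per the log, knowing when a dormant
    thread on such a shared tail can be reclaimed is exactly what
    requires reference counting."""
    child = Thread()
    child.stack = t.stack           # shared tail
    child.ring_next = t.ring_next
    t.ring_next = child
    return child
```

Starting from a one-thread ring, pushing a choicepoint and then forking gives a two-thread ring whose choicepoint stacks share the dormant thread as a common tail — the "bottom of the stack is shared but the top isn't" situation mentioned earlier in the log.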
22:55:27 oklofok: nota bene
22:55:30 it's latin for "note well"
22:55:53 but normally in English it's basically like a P.S. it's something that you just mention even though it isn't part of your main point
22:55:57 i see, no wonder i couldn't reverse-engineer that
22:56:26 oklofok: if you had done you'd probably have deduced INTERCAL's entire threading model from scratch
22:56:35 and it's taken years to get it properly confused
22:56:53 reverse-engineer n.b. i meant :P
22:56:58 ah
22:57:29 yes, an idiomatic English abbreviation that's actually an abbreviation for some Latin words which mean something else tends to be quite hard to guess
22:57:50 probably it best translates to "by the way", if you know that idiom
22:58:03 i know that idiom
22:58:23 it's actually somewhat used in finnish
23:00:01 i'm gonna read a bit, and then sleep now, i'll look at that reference language tomorrow
23:00:02 ->
23:00:10 "then sleep now"?
23:00:11 night
23:00:16 aaaanywa ->
23:00:20 *aaaanyway
23:00:30 read, then sleep retroactively through the time you were reading
23:00:37 multitasking via temporal paradox
23:12:23 -!- oc2k1 has quit (Remote closed the connection).
23:22:04 -!- ihope has joined.
23:30:10 -!- oklofok has quit (Read error: 113 (No route to host)).
23:36:13 hooray
23:36:18 finally got yahoo to cancel that domain
23:36:27 actually, I'd better go home now
23:36:33 almost missed my bus...
23:36:35 -!- ais523 has quit ("9").
23:51:06 -!- olsner has quit ("Leaving").
23:59:26 -!- Sgeo has joined.