00:05:20 <olsner> you should build a TM using a cassette tape for magnetic storage, then feed it with one of those "infinite" Möbius strip tapes
00:06:24 <oc2k1> that would be more complex, because that tape is analog
00:08:24 <oc2k1> counter + SRAM wouldn't be bad, they could run up to 66 MHz (or higher, because in BF the address would be set by the previous command)
00:10:54 <oc2k1> Braintwist would be a possibility: load the program from IO and swap tapes. Maybe with a hack that uses a ROM for init
00:11:35 -!- comex has quit ("Caught sigterm, terminating...").
00:13:32 <oc2k1> and ethernet as IO :D
00:16:11 -!- olsner has quit ("Leaving").
00:22:06 -!- jix has quit ("CommandQ").
00:27:33 -!- tusho has joined.
00:42:19 <pikhq> oc2k1: Welcome to the land of Esolangs.
01:02:49 -!- tusho has quit.
01:05:02 -!- Sgeo has joined.
01:12:47 -!- oerjan has quit ("leaving").
21:50:58 -!- clog has joined.
21:52:00 -!- jix has joined.
21:54:02 -!- oklopol has quit (Read error: 60 (Operation timed out)).
21:59:26 -!- caio has quit (Remote closed the connection).
22:01:51 -!- oklofok has joined.
22:02:51 <oklofok> the trick is we're basically doing that garbage collection technique where you mark things you can see; it's just that here we do it every time something is deleted
22:05:43 <oklofok> basically, every object has this nice little number attached to it, and there's another, global number, which gets incremented every time a reference is deleted; when a reference is deleted, the things it references are checked to see if their number is the global number == they've been disref'd this cycle; if it's not the same, they have their number inc'd and refs dec'd, and the reference deletion propagates to whatever they reference, if the refcoun
22:06:12 <oklofok> s/have their number inc'd/set their number to the global number/
22:06:39 <tusho> ais523: interpretation of oklofok's idea pls
22:06:43 <oklofok> i think this is as fast as refcounting, but errorlessss
22:07:34 <tusho> oklofok: plz provide impl
22:07:36 <oklofok> how to appear smart: take a simple idea, and explain it confusingly using all kinds of terms invented on the fly
22:07:39 <ais523> oklofok: what if you have a and b both referencing x, x and y referencing each other?
22:08:04 <ais523> and then, say, a is deleted?
22:08:18 <ais523> what if, instead, x is deleted?
22:08:52 <oklofok> a.ref = x, b.ref = x, y.ref = x, x.ref = y (now we have x.torefs = 2); delete a => x.torefs = 1, and execution stops
22:09:15 <oklofok> a.ref = x, b.ref = x, y.ref = x, x.ref = y (now we have x.torefs = 2); delete x => y.torefs = 0 => x and y are deleted
22:09:29 <oklofok> of course, you cannot delete x
22:10:13 <oklofok> you can only delete local variables, basically; or rather, they're the only things that are ever actually explicitly deleted, albeit automatically (when they go out of scope)
22:10:45 <oklofok> well actually what do i care, x shouldn't be deleted, ever, because x.torefs != 0
22:11:08 <oklofok> ais523: satisfied, disagreeing or confused?
22:11:15 <ais523> ah, I see, even if x goes out of scope a and b might still need it
22:11:25 <ais523> whereas a can go out of scope fine
22:11:31 <ais523> but why is x.torefs 2 not 3
22:11:33 <oklofok> yes, because nothing needs it
22:11:42 <oklofok> actually i guess it's 3...
22:12:01 <oklofok> this is exactly the problem i thought i solved, so i must have some idea
22:12:43 <oklofok> right, the problem is we should also be propagating when the refcount does *not* become zero
22:12:52 <oklofok> that's the whole difference between just refcounting and this
22:13:36 <oklofok> so every time something is removed, we will effectively exhaust everything reachable from it, deccing the refcounts of the whole closure of references
22:14:10 <oklofok> we can probably optimize this quite a lot by having data structures share locally global refcounts or something
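A minimal Python sketch of the propagate-the-decrement idea above, written in the standard trial-deletion form (mark gray / scan / collect white, as used by known cycle-collecting refcounters) rather than as oklofok's exact scheme; every class and function name here is invented for illustration. It replays the a/b/x/y example from earlier in the discussion.

    class Obj:
        def __init__(self, name):
            self.name = name
            self.refs = []         # outgoing references
            self.refcount = 0      # incoming references
            self.color = "black"   # black = live, gray = trial-deleted, white = dead

    def add_ref(src, dst):
        src.refs.append(dst)
        dst.refcount += 1

    def mark_gray(obj):
        # trial deletion: subtract the references internal to obj's closure
        if obj.color != "gray":
            obj.color = "gray"
            for child in obj.refs:
                child.refcount -= 1
                mark_gray(child)

    def scan(obj):
        # anything whose count is still > 0 is held from outside: revive it
        if obj.color == "gray":
            if obj.refcount > 0:
                scan_black(obj)
            else:
                obj.color = "white"
                for child in obj.refs:
                    scan(child)

    def scan_black(obj):
        obj.color = "black"
        for child in obj.refs:
            child.refcount += 1          # undo the trial decrement
            if child.color != "black":
                scan_black(child)

    def closure(obj, seen=None):
        seen = [] if seen is None else seen
        if obj not in seen:
            seen.append(obj)
            for child in obj.refs:
                closure(child, seen)
        return seen

    def delete_ref(src, dst):
        # dropping src -> dst propagates through everything reachable from dst
        src.refs.remove(dst)
        dst.refcount -= 1
        mark_gray(dst)
        scan(dst)
        # a real collector would now free the white objects
        return [o.name for o in closure(dst) if o.color == "white"]

    # the example from above: a.ref = x, b.ref = x, y.ref = x, x.ref = y
    a, b, x, y = Obj("a"), Obj("b"), Obj("x"), Obj("y")
    add_ref(a, x); add_ref(b, x); add_ref(y, x); add_ref(x, y)
    print(delete_ref(a, x))   # []           -- x is still held by b, nothing dies
    print(delete_ref(b, x))   # ['x', 'y']   -- the x/y cycle is now garbage

The traversal cost is proportional to the closure reachable from the dropped reference, which is where the worse worst case discussed below comes from.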
22:14:23 -!- comex has joined.
22:14:33 <ais523> oklofok: "locally global" is a great term
22:14:51 <ais523> also, it reminds me a bit of the way choicepoint stacks work in C-INTERCAL
22:14:58 <ais523> the bottom of the stack is shared but the top isn't
22:15:22 <tusho> ais523: so does it work?
22:15:30 <tusho> and is it just about as efficient as refcounting?
22:15:38 <tusho> if it had all the benefits of gc without actually...gcing, that would rock so hard
22:15:50 <ais523> tusho: I'm not convinced it works yet
22:15:50 <tusho> and oklofok could probably publish a paper on it :P
22:15:53 <ais523> and I don't think it's faster
22:15:58 <ais523> because you're doing the same amount of work
22:16:00 <tusho> ais523: hm, so you still have to traverse shit
22:16:02 <ais523> just at a different time
22:16:17 <ais523> the traversing happens at allocate/finalise, rather than at random intervals
22:16:20 <ais523> actually, that might be faster
22:16:22 <oklofok> so, basically, for the strongly connected components reference graph, we have a shared refcount, methinks
22:16:30 <oklofok> *components of the reference
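One way to make the shared "locally global" refcount concrete, assuming the reference graph is available as an adjacency dict: collapse each strongly connected component (here via Tarjan's algorithm) and keep a single count per component, made of root references plus edges arriving from other components. The condensation is a DAG, so per-component refcounting is not defeated by internal cycles. All names and the example graph are illustrative only.

    def tarjan_scc(graph):
        # Tarjan's algorithm; returns the strongly connected components
        index, low = {}, {}
        stack, on_stack, sccs = [], set(), []
        counter = 0

        def strongconnect(v):
            nonlocal counter
            index[v] = low[v] = counter
            counter += 1
            stack.append(v)
            on_stack.add(v)
            for w in graph[v]:
                if w not in index:
                    strongconnect(w)
                    low[v] = min(low[v], low[w])
                elif w in on_stack:
                    low[v] = min(low[v], index[w])
            if low[v] == index[v]:
                comp = []
                while True:
                    w = stack.pop()
                    on_stack.discard(w)
                    comp.append(w)
                    if w == v:
                        break
                sccs.append(frozenset(comp))

        for v in graph:
            if v not in index:
                strongconnect(v)
        return sccs

    def component_refcounts(graph, roots):
        # one shared count per component: root references plus edges coming
        # in from *other* components
        sccs = tarjan_scc(graph)
        comp_of = {v: c for c in sccs for v in c}
        counts = {c: 0 for c in sccs}
        for r in roots:
            counts[comp_of[r]] += 1
        for v, targets in graph.items():
            for w in targets:
                if comp_of[v] != comp_of[w]:
                    counts[comp_of[w]] += 1
        return counts

    # example: locals a and b hold x, and x/y reference each other
    graph = {"a": ["x"], "b": ["x"], "x": ["y"], "y": ["x"]}
    for comp, count in component_refcounts(graph, roots=["a", "b"]).items():
        print(sorted(comp), count)   # ['x', 'y'] shares a single count of 2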
22:17:00 <tusho> oklofok: is it better than gc for actual use would you say>?
22:17:05 <tusho> or mainly interesting but not useful
22:17:24 <tusho> though if it works, that is neat
22:17:33 <tusho> and also reaffirms my theory that generally you should believe oklofok
22:17:40 <tusho> even if you don't understand wtf he's talking about
22:17:41 <ais523> oklofok's idea strikes me as the sort of thing that seems likely to have a way of working
22:17:56 <ais523> although it also strikes me as the sort of thing that doesn't have all the details worked out yet
22:18:01 <ais523> but it seems plausible as a general idea
22:18:07 <tusho> ais523: that's all of oklofok's stuff.
22:18:16 <ais523> wow, oklofok invented Feather
22:18:28 <ais523> (yes, yes, false syllogism I know)
22:18:40 <oklofok> details generally come when i open python... :P
22:21:16 <oklofok> anyway, i think if something like that were well implemented, there would be cases where refcounting works as fast as explicit memory deallocation, but sometimes it would propagate deletion information around the variable graph all the time, and things would get n^2
22:21:38 <oklofok> and i cannot open python today, i need to read
22:21:39 -!- kar8nga has left (?).
22:22:24 <oklofok> so, when someone leaves the room, do you whois him to see whether he disliked the subject, or just has a habit of leaving channels before disconnecting?
22:22:25 <ais523> oklofok: yes, it obviously has different order properties to standard collection
22:22:32 <ais523> AFAICT it has a better best case and a much worse worst case
22:22:52 <tusho> 'has a habit of leaving channels before disconnecting'
22:23:05 <ais523> tusho: it makes sense for people who like to be clean and tidy
22:23:15 <ais523> for instance I tend to empty the stack in Underload programs before they exit
22:23:16 <oklofok> tusho: it's not that rare.
22:23:18 <tusho> your client already does that...
22:23:22 <ais523> and close programs before shutting down the computer
22:23:30 <ais523> and free() stuff in C just before the program ends
22:23:39 <oklofok> tusho: yes, that's probably the only reason for doing it; this doesn't really change anything
22:23:42 <ais523> none of that is needed, in theory (in the third case, at least not on a decent OS)
22:23:49 <tusho> ais523: i do none of those
22:23:51 <tusho> esp. not closing programs
22:23:54 <tusho> i have too many open
22:24:05 <tusho> i advocate a data-based approach to computing
22:24:09 <tusho> not document or program based
22:24:14 <tusho> so it kind of comes naturally to me
22:24:31 -!- jix has quit (Nick collision from services.).
22:24:45 -!- jix has joined.
22:24:45 <ais523> well, I like starting from a clean slate when I boot, and closing things down so I can deal with things that happen during the close down
22:24:59 <ais523> although there usually aren't any
22:25:02 <tusho> ah, you use a stupid os that reopens programs on startup.
22:25:05 <ais523> also it gives me a sort of wind-down
22:25:12 <ais523> it can be set to do that
22:25:15 <ais523> but I haven't set that setting
22:25:26 <tusho> i just suspend this computer :q
22:25:48 <ais523> (btw, I don't exit notification area applications like Akregator or Rhythmbox, even though I do start them by hand, some strange psychology is at work there...)
22:25:56 <oklofok> aaaanyway, this is kinda like connected components: you can do it for the whole graph in O(n) (the GC approach), or find the connected component of every node separately in O(n^2) (the refcounting thing); it's just that the latter will actually be closer to O(n) if there are only a few references that cannot be "optimized", and you can more easily do it online
22:26:22 <tusho> oklofok: well, most programs don't have circular references
22:26:24 <tusho> or at least, not too many
22:26:35 <tusho> which is more efficient if you only have a few circular references?
22:26:36 <oklofok> in that case it will be equal to refcounting.
22:26:42 <tusho> as opposed to Circular Reference Exampleprogram Deluxe
22:26:46 <tusho> oklofok: what, won't it handle them properly?
22:27:23 <oklofok> it will handle everything properly, that's not the issue
22:27:25 <ais523> oklofok: actually I think refcounting will be slower as long as you don't have deep chains of objects
22:27:33 <ais523> a linked list would really kill a system like yours
22:27:40 <ais523> hmm... refcounting doesn't like linked lists either, though
22:28:28 <ais523> you're planning to optimise this already?
22:28:53 <oklofok> well i was thinking of keeping massive amounts of information about the subgraphs between variables
22:29:02 <oklofok> so you can't execute anything
22:29:12 <oklofok> because you'll just end up changing those graphs for ages
22:29:50 <oklofok> but i don't think i'm going to optimize anything at first
22:30:12 <oklofok> but, i think for recursive data structures, you can use a local global refcount
22:31:12 <oklofok> i'm sure i'll have something concrete tomorrow, if i have the time
22:32:06 <oklofok> i think, at least, this refcounting thing might be a useful abstraction for deducing deletion patterns for data structures
22:32:11 <oklofok> which would be kinda nice already
22:32:25 <tusho> ooh, that'd be nice
22:34:21 <oklofok> for a recursive definition like L = <smth> | L, we see the child always has exactly one reference more than its parent, inside this one data structure, so we have the normal refcount for the nodes, and then just the info about how deep they are in the structure, and what the refcount of the ultimate parent is
22:34:42 <oklofok> this shouldn't be list-specific, i'm sure it would work for any tree structure
22:34:58 <oklofok> of course, there would be more propagation... :P
22:35:57 <oklofok> we don't need to know what the exact refcount is for a node
22:36:11 <oklofok> we just need to know whether it has any references from higher up in the data structure
22:36:23 <oklofok> so, we basically have an "ultimate parent" flag
22:36:36 <oklofok> which only propagates one level down when a parent is deleted
22:36:53 <oklofok> this way i think recursive structures can be optimized entirely
22:37:00 <oklofok> except there's a problem ais523 will point out to me
22:37:23 <ais523> does it work on the sort of recursive structure that C-INTERCAL uses to hold threads?
22:37:27 <ais523> that's pretty weird, and complex
22:37:38 <oklofok> an object can be part of any number of data structures, so it will have a list of these ultimate parent flags, and the refcount for other references
22:37:52 <oklofok> i have no idea, what kind are those?
22:38:15 <oklofok> i'm thinking more like something that can be proved to work in all cases
22:38:26 <oklofok> so, i'd say it would work on those
22:38:33 <oklofok> and if not, then i should get back to work
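A rough Python sketch of the "ultimate parent" flag for the simplest recursive structure, a singly linked list, under the simplifying assumption that each node belongs to exactly one structure (the per-structure list of flags mentioned above is left out); all names are invented for illustration.

    class Node:
        def __init__(self, value):
            self.value = value
            self.next = None
            self.ext_refs = 0             # references from outside the list
            self.held_by_parent = False   # does my parent inside the list still hold me?

    def cons(value, tail):
        node = Node(value)
        node.next = tail
        if tail is not None:
            tail.held_by_parent = True
        return node

    def release_if_dead(node):
        # a node dies when nothing outside the list holds it and its parent is
        # gone; the flag then propagates exactly one level down, as described above
        freed = []
        while node is not None and node.ext_refs == 0 and not node.held_by_parent:
            freed.append(node.value)
            if node.next is not None:
                node.next.held_by_parent = False
            node = node.next
        return freed

    def drop_external_ref(node):
        # an outside variable pointing at `node` goes out of scope
        node.ext_refs -= 1
        return release_if_dead(node)

    # example: the list 1 -> 2 -> 3, with a second variable pointing at the 2-node
    n3 = cons(3, None)
    n2 = cons(2, n3)
    n1 = cons(1, n2)
    n1.ext_refs = 1   # a local variable holds the head
    n2.ext_refs = 1   # another local points into the middle
    print(drop_external_ref(n1))   # [1]     -- the 2-node becomes the new "ultimate parent"
    print(drop_external_ref(n2))   # [2, 3]  -- now the rest of the list dies too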
22:47:38 <oklofok> the language for testing, i guess, is just type definitions, Tree = Node a | Branch (.left Tree) (.right Tree), so you can explicitly give things a type, and then set variables to things and set the references of variables, like {x.ref1 = y} and {Tree x; x.left = y}, after which x.ref1 wouldn't work anymore, because x would be limited to being a tree
22:48:37 <oklofok> quite a useless language, but i don't think control flow would complicate the issue at all, at least without concurrency
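A toy Python model of that test language, as I read it: a declared type restricts which named reference slots a variable may use. The TYPES table and every identifier below are invented for illustration.

    TYPES = {"Tree": {"left", "right"}}   # Tree = Node a | Branch (.left Tree) (.right Tree)

    class Var:
        def __init__(self):
            self.typename = None   # untyped variables may use any slot name
            self.refs = {}

        def declare(self, typename):
            self.typename = typename

        def set_ref(self, slot, target):
            # once a variable has a declared type, only that type's slots work
            if self.typename is not None and slot not in TYPES[self.typename]:
                raise TypeError(f"{self.typename} has no reference slot {slot!r}")
            self.refs[slot] = target

    x, y = Var(), Var()
    x.set_ref("ref1", y)       # fine while x is untyped:      {x.ref1 = y}
    x.declare("Tree")          # corresponds to:               {Tree x; ...}
    x.set_ref("left", y)       # fine, Tree has a .left slot:  {... x.left = y}
    try:
        x.set_ref("ref1", y)   # no longer allowed, x is limited to being a tree
    except TypeError as e:
        print(e)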
22:49:23 <ais523> oklofok: well, the C-INTERCAL thread structure is at its base a singly linked ring
22:49:48 <ais523> each member of the ring links to a singly linked list of other objects of the same type which aren't part of the ring (possibly empty)
22:50:10 <ais523> and the tails of those lists can be shared (i.e. after a while the lists point to the same members)
22:50:29 <ais523> also each object has another pointer which can either point to an arbitrary object or to itself
22:50:37 <ais523> pointing to itself is usual
22:50:57 <ais523> finally, there's a sort of object that's just a placeholder and can only exist in the singly linked lists
22:51:10 <ais523> and doesn't have any pointers except the ones that hold the list together
22:51:12 <ais523> I think that's about it
22:51:36 <ais523> (you may boggle at the structure of INTERCAL that it requires such a complicated data structure to represent its threading model, if you wish)
22:51:37 <oklofok> and how is this structure used exactly?
22:51:47 <ais523> well, the ring contains active threads
22:51:51 <ais523> which take turns executing
22:51:57 <ais523> the single lists are choicepoint stacks
22:51:59 <ais523> for each thread in the list
22:52:13 <ais523> they're made out of dormant threads which aren't executing right now, but could be later
22:52:18 <ais523> and stale threads which are just placeholders
22:52:29 <ais523> also, any two threads can either share variables or not share variables
22:52:41 <ais523> and so there's a pointer to the thread in which a thread's variables are actually stored
22:52:54 <ais523> which is normally a pointer back to self, but not necessarily if there's sharing going on
22:53:34 <oklofok> when a thread becomes dormant, it takes the next thread, and attaches itself to that thread's linked list?
22:53:47 <oklofok> or what's the point of these lists
22:54:17 <ais523> a user can create a dormant thread that's a copy of the current one
22:54:21 <ais523> and push it onto the choicepoint stack
22:54:34 <ais523> a user can also replace the current thread with the top dormant one on its choicepoint stack
22:54:47 <ais523> except that if the tail of that stack is shared with another, the thread is deleted instead
22:55:05 <ais523> (N.B. this requires reference-counting to work correctly)
22:55:24 <ais523> this makes it possible to backtrack past a fork()-equivalent
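A guess at the overall shape of the structure ais523 describes, written as Python classes; the real C-INTERCAL code certainly differs, so every field and function name below is invented, and the refcounted shared-tail handling needed for backtracking is omitted.

    class Thread:
        def __init__(self, name, stale=False):
            self.name = name
            self.ring_next = self    # singly linked ring of active threads
            self.stack_top = None    # head of this thread's choicepoint stack
            self.stack_next = None   # link used while sitting inside a stack;
                                     # tails may be shared between threads
            self.variables = self    # thread whose variables this one uses;
                                     # points back to self unless sharing is on
            self.stale = stale       # stale threads are placeholders that only
                                     # ever live inside choicepoint stacks

    def push_choicepoint(active):
        # a dormant copy of the current thread, pushed onto its choicepoint stack
        dormant = Thread(active.name + "-cp")
        dormant.variables = active.variables
        dormant.stack_next = active.stack_top
        active.stack_top = dormant
        return dormant

    # example: a ring of two active threads, one of which forks a choicepoint
    t1, t2 = Thread("t1"), Thread("t2")
    t1.ring_next, t2.ring_next = t2, t1
    push_choicepoint(t1)
    print(t1.stack_top.name, t1.stack_top.variables.name)   # t1-cp t1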
22:55:30 <ais523> it's Latin for "note well"
22:55:53 <ais523> but normally in English it's basically like a P.S.: something that you just mention even though it isn't part of your main point
22:55:57 <oklofok> i see, no wonder i couldn't reverse-engineer that
22:56:26 <ais523> oklofok: if you had done you'd probably have deduced INTERCAL's entire threading model from scratch
22:56:35 <ais523> and it's taken years to get it properly confused
22:56:53 <oklofok> reverse-engineer n.b. i meant :P
22:57:29 <ais523> yes, an idiomatic English abbreviation that's actually an abbreviation for some Latin words which mean something else tends to be quite hard to guess
22:57:50 <ais523> probably it best translates to "by the way", if you know that idiom
22:58:23 <oklofok> it's actually somewhat used in finnish
23:00:01 <oklofok> i'm gonna read a bit, and then sleep now, i'll look at that reference language tomorrow
23:00:30 <ais523> read, then sleep retroactively through the time you were reading
23:00:37 <ais523> multitasking via temporal paradox
23:12:23 -!- oc2k1 has quit (Remote closed the connection).
23:22:04 -!- ihope has joined.
23:30:10 -!- oklofok has quit (Read error: 113 (No route to host)).
23:36:18 <tusho> finally got yahoo to cancel that domain
23:36:27 <ais523> actually, I'd better go home now
23:36:33 <ais523> almost missed my bus...
23:36:35 -!- ais523 has quit ("9").
23:51:06 -!- olsner has quit ("Leaving").
23:59:26 -!- Sgeo has joined.