00:02:19 -!- oerjan has joined.
00:09:55 -!- variable has joined.
00:13:32 -!- trout has quit (Ping timeout: 265 seconds).
00:26:00 -!- Sgeo_ has joined.
00:27:58 -!- Sgeo has quit (Ping timeout: 264 seconds).
00:33:04 -!- imode has quit (Ping timeout: 260 seconds).
00:34:35 -!- imode has joined.
00:35:02 -!- mertyildiran has quit (Quit: Page closed).
00:44:14 -!- trout has joined.
00:46:42 -!- variable has quit (Ping timeout: 256 seconds).
00:56:34 <imode> CMV: graph rewriting cannot be done without using variables and binding said variables to subgraphs.
01:00:40 <alercah> imode: define graph rewriting and variables here
01:01:53 <oerjan> . o O ( channel mini ...what?... )
01:02:22 <oerjan> . o O ( darn, fooled by PPCG chat )
01:02:54 <imode> alercah: graph rewriting, rewriting over graphs. lhs -> rhs rules, except both the lhs and rhs are graphs. lhs is a pattern (that may include variables on edges, nodes, etc.) that's to be found in the given graph, and rhs is the pattern that's supposed to replace it.
01:03:16 <alercah> so "variable" here means a label?
01:03:46 <imode> "variable" means a variable. naively, something you can store something in, namely a node label, an edge label, or an entire subgraph.
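A minimal Python sketch (not from the log) of the rewrite step imode describes: LHS nodes are variables, a naive backtracking matcher binds them injectively to host nodes, and the RHS is instantiated under the resulting binding. The edge-set representation and all names are illustrative, and the matcher assumes no self-loops in the LHS.

    from itertools import count

    fresh = count(1000)  # supply of fresh host nodes for RHS-only variables

    def match(lhs, host, binding=None):
        """Bind LHS variables to host nodes so every LHS edge maps into the
        host; naive backtracking, returns one binding dict or None."""
        binding = binding or {}
        if not lhs:
            return binding
        (u, v), rest = lhs[0], lhs[1:]
        for (a, b) in host:
            if binding.get(u, a) == a and binding.get(v, b) == b:
                trial = dict(binding, **{u: a, v: b})
                if len(set(trial.values())) == len(trial):  # stay injective
                    result = match(rest, host, trial)
                    if result is not None:
                        return result
        return None

    def rewrite(host, lhs, rhs):
        """One rewrite step: find an image of the LHS, delete it, add the
        image of the RHS (fresh nodes for RHS-only variables)."""
        b = match(lhs, host)
        if b is None:
            return host  # rule does not apply
        out = set(host) - {(b[u], b[v]) for (u, v) in lhs}
        for (u, v) in rhs:
            out.add((b.setdefault(u, next(fresh)),
                     b.setdefault(v, next(fresh))))
        return out

    # rule: subdivide an edge X->Y into X->Z->Y (Z is fresh on the RHS)
    host = {(0, 1), (1, 2)}
    print(sorted(rewrite(host, [("X", "Y")], [("X", "Z"), ("Z", "Y")])))
    # e.g. [(0, 1000), (1, 2), (1000, 1)] if the matcher picked edge (0, 1)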
01:03:59 <alercah> I know off the top of my head of two formulations of HR grammars, one that works in terms of an algebra of equations, like how a CFG works
01:04:07 <alercah> the other works with labeled vertices
01:04:41 <oerjan> . o O ( highly recursive )
01:04:47 <alercah> start with a hypergraph, with labeled vertices
01:05:04 <rain1> whats graph rewriting for?
01:05:43 <alercah> each rule is G -> H where G and H are hypergraphs
01:06:07 <alercah> they must have a certain number of distinguished vertices, the same in each
01:06:23 <imode> do you have some example rules and transformations? you have my curiosity.
01:06:40 <alercah> if you have a label match between a hyperedge in your graph, and the distinguished vertices in G, then you replace the edge with H
01:07:00 <alercah> you could do it without labels in theory, but labels get you a lot more power
01:07:17 <imode> unlabeled is what I'm after in the long run.
01:07:20 <alercah> because otherwise you don't have much control over how matches work
01:07:41 <alercah> if you just want to eliminate labels, you could just invent unique gadgets to replace labels
01:07:56 <imode> yeah, just unique local "shapes".
01:08:14 <alercah> imode: do you know the series-parallel graphs?
01:08:17 <imode> the problem is I can't wrap my head around how you'd specify certain arrangements of nodes and edges.
01:08:54 <alercah> the recursive procedure to build them up can be expressed as an HR grammar
01:09:36 <imode> so let's say I have, for example, a loop that I want to extend. let's say I have a cycle of length 5, with nodes A, B, C, D, and E arranged linearly, with E connected to A.
01:10:02 <imode> I want to match any loop of 5 nodes, and then add a node after the last node. what would the rule look like for that?
01:10:22 <imode> okay, so you can't add or remove vertices.
01:10:32 <alercah> you can replace an edge with a graph
01:10:50 <alercah> 1 sec, let me make sure I get this right
01:10:57 <imode> sure, thanks for even bothering.
01:11:22 <alercah> imode: this was my masters thesis :)
01:11:34 <imode> came to the right person then. :P
01:11:50 <alercah> imode: so do you know the definition of series-parallel graphs by reduction?
01:12:10 <imode> also the thing that sparked my interest was https://www.slideshare.net/slidarko/computing-with-directed-labeled-graphs-3879998
01:13:29 <alercah> a graph is series-parallel if it can be reduced to P_2 by removing duplicate edges and by contracting a vertex of degree 2 (other than the endpoints)
01:14:31 <imode> contracting a vertex of degree 2. in- or out-degree?
01:14:37 <imode> or are we talking about undirected, sorry.
01:15:04 <alercah> oh wait my bad, I got this construction of HR slightly wrong T_T
01:15:10 <alercah> (I worked with the algebraic one)
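For concreteness, a small Python sketch of the reduction definition just given: merge parallel edges, contract internal degree-2 vertices, and accept if the multigraph reduces to the single edge s-t. The representation (a Counter of sorted vertex pairs) is invented here and the loop is deliberately naive.

    from collections import Counter

    def is_series_parallel(edges, s, t):
        """True iff the undirected multigraph reduces to the edge {s, t}
        by merging duplicate edges and contracting internal deg-2 vertices."""
        m = Counter(tuple(sorted(e)) for e in edges)
        changed = True
        while changed:
            changed = False
            for e in list(m):                 # rule 1: parallel edges merge
                if m[e] > 1:
                    m[e] = 1
                    changed = True
            deg = Counter()
            for (u, v), k in m.items():
                deg[u] += k
                deg[v] += k
            for x in list(deg):               # rule 2: contract internal deg-2
                if x not in (s, t) and deg[x] == 2:
                    nbrs = [e for e in m if x in e]
                    if len(nbrs) == 2:        # two distinct edges, no self-loop
                        (a,) = set(nbrs[0]) - {x}
                        (b,) = set(nbrs[1]) - {x}
                        del m[nbrs[0]], m[nbrs[1]]
                        m[tuple(sorted((a, b)))] += 1
                        changed = True
                        break
        return set(m) == {tuple(sorted((s, t)))}

    # two parallel 2-edge paths from 0 to 3: series-parallel
    print(is_series_parallel([(0, 1), (1, 3), (0, 2), (2, 3)], 0, 3))  # True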
01:15:24 <imode> I take it you're somewhere around computer engineering?
01:15:36 -!- variable has joined.
01:15:37 <alercah> the thesis was in graph structure theory
01:16:55 <alercah> the labeling is on edges, not vertices
01:17:03 <alercah> and it denotes the nonterminals
01:17:19 <imode> I don't think the rules could possibly apply to all hypergraphs then, could they?
01:17:25 <alercah> so e.g. for series-parallel, we have one nonterminal S
01:18:20 <alercah> and we have rules v-S-w -> v-S-x-S-w, and v-S-w -> v=SS=w
01:18:35 <alercah> but the first one is meant to be series construction and the second one is parallel
01:18:48 <alercah> or in other words, subdivision and duplication
01:19:06 -!- trout has quit (Ping timeout: 260 seconds).
01:19:21 <alercah> and then you have one rule v-S-w -> v-w
01:19:26 <alercah> which just eliminates the nonterminal
01:19:28 <imode> that's pretty interesting notation, I can see the forking and merging.
01:19:34 <alercah> yeah this is just invented right now for IRC
01:19:38 <alercah> I would not recommend it in general :)
01:20:09 <alercah> (hypergraphs are often depicted with, say, circles for edges and boxes for vertices, as otherwise they get pretty ambiguous pretty fast)
01:20:30 <alercah> so the series-parallel graphs are the ones you get starting from one S-edge and then replacing edges
01:20:35 <imode> lmao, I figured. but that's interesting. so you can derive any "reasonable" circuit from those two rules.
01:20:51 <alercah> the result is well-known, it just happens to be a nice simple example of an HR grammar
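A toy Python rendering of those three rules (representation invented here): start from a single s-S-t nonterminal edge, repeatedly subdivide or duplicate S-edges, and finally erase the S labels.

    import random

    def generate_sp(steps=5, seed=0):
        """Grow a series-parallel graph from one nonterminal edge by the two
        productions above, then apply v-S-w -> v-w to every leftover S."""
        rng = random.Random(seed)
        edges = [("s", "t", "S")]              # (endpoint, endpoint, label)
        fresh = iter(range(10**6))
        for _ in range(steps):
            nonterms = [e for e in edges if e[2] == "S"]
            if not nonterms:
                break
            u, v, _ = e = rng.choice(nonterms)
            edges.remove(e)
            if rng.random() < 0.5:             # v-S-w -> v-S-x-S-w (series)
                x = f"x{next(fresh)}"
                edges += [(u, x, "S"), (x, v, "S")]
            else:                              # v-S-w -> v=SS=w (parallel)
                edges += [(u, v, "S"), (u, v, "S")]
        return [(u, v) for (u, v, _) in edges]  # eliminate the nonterminals

    print(generate_sp())  # a small series-parallel edge list from 's' to 't'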
01:21:04 <imode> but that does use variables.
01:21:08 <imode> to bind to labels.
01:21:17 <alercah> it uses nonterminal labels
01:21:23 <alercah> but in a similar way to how a CFG does
01:22:23 <alercah> you can use labels to restrict the bindings, yeah
01:22:56 <imode> so I guess I'd ask.. how would I use hyperedge rewriting to do general graph manipulations like I detailed above?
01:23:01 <imode> that's kind of what I'm after.
01:23:37 <imode> ah. that's still interesting though, I didn't know there was a process for deriving those kinds of graphs.
01:23:50 <alercah> it's actually the start of a rabbithole of deep results
01:24:06 <imode> I can understand why you don't want labeled verts, because in those graphs, all you _really_ care about are the edges.
01:25:10 <alercah> nothing wrong with using labels to carry information, as well
01:25:47 <alercah> the HR grammars are nice because they naturally correspond to the CFGs over strings
01:25:57 <alercah> there are other kinds, though, like vertex replacement
01:26:18 <alercah> similar idea, but they use vertices as the replacement point, rather than edges
01:26:22 <imode> what corresponds to unrestricted grammars?
01:26:44 <alercah> I'm not sure off the top of my head
01:28:49 <alercah> might be worth thinking about it in terms of the algebraic formulations
01:28:55 <alercah> the other way of expressing HR grammars
01:29:04 <alercah> and that was the one I was thinking of initially
01:30:02 <alercah> so in this system, you work over graphs where each graph has some labeled vertices (the sources), no two with the same label
01:31:01 <alercah> you can freely rename sources, or make a source into a non-source
01:31:30 <alercah> (technically renaming needs to preserve uniqueness so it's best expressed as swapping to keep things mathematically simple, but that's not necessarily relevant)
01:31:54 <alercah> and then you can do a parallel composition, where you glue two graphs together at matching sources
01:32:11 <imode> huuuuuuuh. now _that's_ interesting. I can see how they're equivalent.
01:32:47 <alercah> this can be expressed as an algebra
01:32:53 <alercah> (that is, a bunch of things and some operations on them)
01:33:04 <alercah> and then you can view a grammar as a system of equations
01:34:54 <alercah> e.g. for series-parallel, S = st | S // S [parallel construction] | rename[t->p](S) // rename[s->p](S) [series construction]
01:35:06 <alercah> st is a constant single edge with ends s and t
01:35:26 <alercah> and then S // S is parallel since it joins two graphs at s and t
01:35:41 <alercah> and the series construction is obtained by taking two graphs, renaming opposite ends of them to p, then merging
01:35:49 <alercah> so p is fused but s and t remain as the endpoints
01:36:00 <alercah> (oh I forgot you have to add a forget[p] operation)
01:37:06 <alercah> the graphs generated by this grammar are just the minimal solution to the system of equations
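A sketch in Python of this algebra, with invented representation details: a graph carries named sources, and the operations are rename, forget, and parallel composition (gluing at like-named sources); the series construction is then the ren/par/fgt composite from the equation above.

    import itertools

    _ids = itertools.count()   # globally fresh vertex ids, so gluing only
                               # ever happens through matching source names

    class G:
        def __init__(self, edges, srcs):
            self.edges, self.srcs = edges, srcs   # srcs: name -> vertex id

    def st():
        """The constant st: a single edge whose ends are sources s and t."""
        a, b = next(_ids), next(_ids)
        return G([(a, b)], {"s": a, "t": b})

    def ren(g, old, new):
        """Rename a source label (renaming must preserve uniqueness)."""
        srcs = dict(g.srcs)
        srcs[new] = srcs.pop(old)
        return G(g.edges, srcs)

    def fgt(g, name):
        """Forget a source: the vertex stays but can no longer be glued on."""
        srcs = dict(g.srcs)
        srcs.pop(name)
        return G(g.edges, srcs)

    def par(g, h):
        """Parallel composition: glue h onto g at like-named sources."""
        sub = {h.srcs[n]: g.srcs[n] for n in h.srcs if n in g.srcs}
        edges = g.edges + [(sub.get(u, u), sub.get(v, v)) for (u, v) in h.edges]
        return G(edges, dict(h.srcs, **g.srcs))

    def series(g, h):
        """The series construction: rename opposite ends to p, glue, forget."""
        return fgt(par(ren(g, "t", "p"), ren(h, "s", "p")), "p")

    two_path = series(st(), st())           # s - (forgotten p) - t
    print(len(par(two_path, st()).edges))   # parallel with an edge: 3 edges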
01:37:56 <alercah> inherently, though, it can't do the sort of structural replacement you're looking for because it is focused on having a sort of exposed surface to glue graphs together, if you will
01:38:30 <alercah> you could use it to extend a cycle, but not sensitively to the size
01:38:45 <imode> yeah I can see that. but from what I'm seeing, the gluing operation can be used to construct the desired graph.
01:38:52 <imode> just with a specific sequence of operations.
01:39:08 <imode> correct me if I'm wrong on that.
01:39:48 <alercah> but it turns out you cannot create a grammar in this system that will have unbounded cycle size
01:39:54 <alercah> (assuming you are restricting to finite graphs)
01:40:22 <alercah> the fundamental limitation is that you can only "keep track" of finitely many vertices
01:40:47 <alercah> so eventually you'd lose track of how big the cycle is and just keep expanding it indefinitely
01:41:11 <alercah> I was thinking slightly wrongly
01:41:17 <imode> 's all good. I think wrongly all the time.
01:41:53 <alercah> you can of course do that because the series-parallel graphs do that :P
01:42:02 <alercah> you can't have unbounded cliques though
01:42:17 <alercah> because once you forget a vertex, its relationships are finalized
01:43:24 <alercah> and this kind of model can't take edges away
01:43:42 <alercah> so once you have a 5-cycle, you can't turn it into a 6-cycle
01:45:30 <alercah> CFGs basically work on the same underlying technology
01:45:37 <imode> yeah, just over strings and not graphs..
01:45:49 <imode> and they have the same limitation of finite-ness...
01:46:13 <alercah> I guess the unrestricted grammar version would probably just be allowing arbitrary equations on the left-hand side as well
01:46:31 <imode> yeah. I think that'd solve the vertex amnesia.
01:46:54 <imode> and actually allow you to construct arbitrarily large cycles.
01:47:43 <alercah> you also can do this with different underlying algebras
01:47:51 <alercah> since HR is far from the only reasonable algebra on graphs
01:48:35 <alercah> (there's some tricksy technology that's used to define what it means to be an equational set i.e. generated set here, but I think the general idea is sound)
01:48:35 <imode> given. all sorts of graph grammars and equivalent algebras to explore. :P
01:49:22 <alercah> I'd probably point you at Courcelle if you want to read more :)
01:49:31 -!- trout has joined.
01:49:35 <alercah> his papers cover a lot of relevant stuff
01:50:06 <imode> say no more fam, I've got it logged down to research.
01:51:29 <alercah> there's also some cool relation to the regular languages here: you can generalize the idea of a regular language in a couple of different ways (e.g. myhill-nerode construction, DFAs)
01:51:40 <alercah> (and a homomorphism-based one)
01:52:17 <alercah> they all work out to be the same thing
01:52:26 -!- variable has quit (Ping timeout: 256 seconds).
01:52:29 <alercah> but the relationship between recognizable sets (this generalization) and equational sets is not subset like with strings
01:52:50 <alercah> but you do get the filtering theorem, that the intersection of equational and recognizable is equational
01:55:00 -!- S_Gautam has quit (Quit: Connection closed for inactivity).
02:00:38 <lambdabot> ENVA 270150Z 10010KT CAVOK 21/13 Q1020 RMK WIND 670FT 09006KT
02:03:18 <alercah> imode: did I change your view :P
02:07:39 <imode> ehh, not realy. to actually implement your concepts, you still need binding. :P
02:07:55 <imode> and it doesn't generalize to all graphs (I'm interested in working over semantic nets).
02:08:03 <imode> but by god what you covered was interesting as hell.
02:16:09 -!- MDude has quit (Ping timeout: 268 seconds).
02:22:50 -!- variable has joined.
02:25:58 -!- trout has quit (Ping timeout: 265 seconds).
03:00:49 -!- trout has joined.
03:04:06 -!- variable has quit (Ping timeout: 260 seconds).
03:33:48 -!- variable has joined.
03:36:50 -!- trout has quit (Ping timeout: 256 seconds).
03:42:50 -!- variable has quit (Quit: /dev/null is full).
03:42:52 <esowiki> [[ZZT-Flip]] https://esolangs.org/w/index.php?diff=57061&oldid=56952 * Zzo38 * (+258)
04:12:17 -!- sprocklem has quit (Quit: sprocklem).
04:12:38 -!- sprocklem has joined.
04:13:29 -!- sprocklem has quit (Client Quit).
04:16:52 -!- sprocklem has joined.
04:45:20 -!- MDude has joined.
05:24:35 -!- tromp has joined.
05:48:36 -!- oerjan has quit (Quit: Nite).
06:06:53 -!- tromp has quit (Remote host closed the connection).
06:26:08 -!- sftp has quit (Ping timeout: 244 seconds).
06:29:21 -!- XorSwap has joined.
06:38:38 -!- moei has quit (Quit: Leaving...).
06:40:18 -!- moei has joined.
06:40:41 -!- tromp has joined.
06:45:18 -!- tromp has quit (Ping timeout: 260 seconds).
06:45:48 -!- variable has joined.
06:47:42 -!- variable has quit (Client Quit).
07:21:47 -!- SopaXorzTaker has joined.
07:37:14 -!- wob_jonas has joined.
07:47:04 -!- imode has quit (Ping timeout: 260 seconds).
08:15:57 -!- S_Gautam has joined.
08:18:11 -!- XorSwap has quit (Quit: the creeping crawling chaos will return.).
08:26:41 -!- tromp has joined.
08:31:24 -!- tromp has quit (Ping timeout: 260 seconds).
10:01:43 -!- tromp has joined.
10:06:29 -!- tromp has quit (Ping timeout: 260 seconds).
10:08:13 -!- SopaXorzTaker has quit (Remote host closed the connection).
10:52:37 -!- variable has joined.
11:13:07 -!- fungot has quit (Ping timeout: 245 seconds).
11:13:25 -!- fungot has joined.
11:24:19 -!- trout has joined.
11:27:47 -!- variable has quit (Ping timeout: 265 seconds).
11:48:16 -!- tromp has joined.
11:53:14 -!- tromp has quit (Ping timeout: 260 seconds).
11:55:49 -!- variable has joined.
11:59:00 -!- trout has quit (Ping timeout: 245 seconds).
12:13:48 -!- ais523 has joined.
12:14:20 <ais523> trying to prove Stun Step TC has been very interesting
12:15:02 <ais523> the issue is that we don't have any TCness proofs for reversible counter machines yet, and it's fairly difficult to emulate anything using a reversible counter machine other than other reversible counter machines
12:15:17 <ais523> I've been working on a high-level (as esolangs go) language to bridge the gap
12:15:28 <ais523> but it's probably going to need an IDE to be able to program in effectively
12:15:45 <wob_jonas> I have a theory question, not specifically for you, but for anyone here
12:16:50 <wob_jonas> it's well-known that a finite state machine with two stacks using some finite alphabet in the stacks, with UB on stack underflow, is Turing-complete, and you can simulate a one-tape Turing machine on that, right?
12:18:21 <ais523> yes, the well-known variant is TC assuming sufficiently powerful control flow
12:18:47 <ais523> oh, you also need the alphabet to contain at least one character
12:18:56 <ais523> err, at least two, if underflow is UB
12:19:08 <ais523> (you use one as an EOF character and the other as a counter)
12:19:38 <wob_jonas> The control is that the program has any finite number of states, and each state that pops has different followup states for each possible letter it can pop.
12:19:58 <wob_jonas> The alphabets are large enough, you could say the program tells how large, but finite.
12:20:17 <wob_jonas> That is, the two stacks have separate alphabets, each large enough but finite.
12:21:14 <ais523> that's TC so far (e.g. you can implement StackFlow in it)
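The folklore construction behind this, as a Python sketch: one stack holds the tape to the left of the head, the other holds the head cell and everything to its right, with '$' as the EOF sentinel mentioned above; blanks are materialized on demand instead of underflowing, giving a two-way-infinite tape.

    class TwoStackTape:
        """A one-tape TM's tape on two stacks (tops are the list ends)."""
        BLANK, EOF = "_", "$"

        def __init__(self, word):
            self.left = [self.EOF]
            self.right = [self.EOF] + list(reversed(word or self.BLANK))

        def read(self):
            return self.right[-1] if self.right[-1] != self.EOF else self.BLANK

        def write(self, sym):
            if self.right[-1] == self.EOF:
                self.right.append(sym)   # materialize a cell past the end
            else:
                self.right[-1] = sym

        def move(self, direction):
            if direction == "R":         # head cell crosses to the left stack
                self.left.append(self.right.pop()
                                 if self.right[-1] != self.EOF else self.BLANK)
            else:                        # "L": and vice versa
                self.right.append(self.left.pop()
                                  if self.left[-1] != self.EOF else self.BLANK)

    tape = TwoStackTape("ab")
    tape.move("R"); tape.write("x")      # step right, overwrite the 'b'
    print(tape.read())                   # -> x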
12:21:37 <wob_jonas> I know it's TC. What I'm saying is that it's also well-known to be TC.
12:21:51 <wob_jonas> For my variant, what I need to be well-known is that it can simulate a Turing machine with an oracle, where the program can interactively query the oracle for a string from an alphabet, and get a yes or no answer, and the program can call the oracle any number of times.
12:22:23 <ais523> you want a stack machine with an oracle?
12:22:30 <wob_jonas> And the machine I want to use for this is the two-stack machine, except it has a special state when it queries the oracle, with two followup states, and importantly, the entire first stack is the input to the oracle.
12:23:18 <wob_jonas> So what I want well-known is that the machine can temporarily save all its state to its second stack, not using the first stack or the control state, and continue, and call the oracle with arbitrary input strings from the alphabet this way.
12:23:24 <ais523> I'm not sure if it's well-known, but it's true
12:23:52 <wob_jonas> The oracle input may need to be in a specific format, and can't have extra leading or trailing symbols or anything.
12:24:11 <ais523> yes, that's not a problem
12:25:10 <wob_jonas> I could relax the requirement about the control state, but using only one stack temporarily and a specific format on the first stack is mandatory for my proof.
12:25:29 <ais523> the way to think about it is that if you have one stack (as opposed to a queue or a tape), the thing restricting you from TCness is that you have nowhere to store the top elements of your stack while you read the bottom ones
12:25:41 <wob_jonas> And I also want this to run this with a reasonable speed, with at most polynomial slowdown in the number of steps.
12:25:46 <ais523> so you have to throw them away (a PDA) or else store them elsewhere, and a second stack is sufficient to store them elsewhere
12:26:12 <ais523> but you can use the second stack for other things while you don't need it as a temporary
12:26:48 <ais523> if you want to switch into "oracle mode", you do shuffling around to effectively put a sequence of imperative commands onto the main stack (each of these corresponds to a state that runs the command in question)
12:27:06 <wob_jonas> It's like the problem with those string replacement languages: you have to move the oracle input from somewhere on the second stack to the first stack, and only that input, with some letter swaps, which requires a lot of states but who cares.
12:27:10 <ais523> "push symbol X on oracle stack", "pop oracle stack", "run oracle", "go to state Y"
12:27:38 <ais523> then you switch to a state that simply repeatedly pops from the main stack and runs the appropriate command
12:27:38 -!- trout has joined.
12:28:08 <ais523> (if the oracle is outputting in state form, you need two states to continue running the program after it returns, one for true, one for false, so that you use the state as a temporary to remember the oracle's answer)
12:28:28 <ais523> this should be an O(n) slowdown where n is the size of the stack, so if the program was polynomial it'll stay polynomial
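A Python sketch of that command-list trick, with a command encoding invented here: the main stack holds command symbols, and one executor state pops and dispatches them, remembering the oracle's one-bit answer until the final go-to-state command.

    def run_command_list(main, oracle):
        """Pop and run commands off the main stack until a ("go", state)."""
        query, answer = [], None           # the oracle stack being built
        while True:
            cmd = main.pop()
            if cmd[0] == "push":           # ("push", sym): push on oracle stack
                query.append(cmd[1])
            elif cmd == "pop":             # pop oracle stack
                query.pop()
            elif cmd == "run":             # run oracle on the whole stack
                answer = oracle("".join(query))
            else:                          # ("go", state): leave executor mode
                return cmd[1], answer      # next state + remembered answer

    # commands are popped top-first, so they are listed here in reverse
    cmds = [("go", "resume"), "run", ("push", "1"), ("push", "r")]
    print(run_command_list(cmds, lambda q: q == "r1"))   # ('resume', True)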
12:28:57 <ais523> interestingly, I have something like this as a primitive that I put in higher-level stack-based languages
12:29:46 <wob_jonas> How I'm trying to use this is that the oracle is going to be a random-access disk storing an infinity of bits, and you give the address to read or write in (say) binary to the oracle, plus tell it whether to write zero, write one, or read.
12:30:04 <ais523> it's the opposite of "infra" from Joy, so I call it "ultra"; what it does is converts the entire stack into a single function, which when executed pushes the data in question back onto the stack
12:30:08 <wob_jonas> Also, for reasons, the disk requires you to write a bit before you read it, otherwise it's UB.
12:30:45 <wob_jonas> And this way a stack machine with that disk as oracle can "efficiently" simulate a RAM machine or a heap cons allocation machine, where the slowdown is a polylog factor of the runtime.
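The disk itself is easy to model; here is a sketch with an invented query encoding (first character is the operation: '0'/'1' writes that bit, 'r' reads; the rest is the binary address), with the stated read-before-write UB surfaced as an error.

    class DiskOracle:
        """The random-access disk as a yes/no oracle over query strings."""
        def __init__(self):
            self.bits = {}

        def query(self, q):
            op, addr = q[0], q[1:]
            if op in "01":                       # write 0 or write 1
                self.bits[addr] = (op == "1")
                return True                      # acknowledge the write
            if addr not in self.bits:            # read
                raise RuntimeError("UB: read before write at " + addr)
            return self.bits[addr]

    disk = DiskOracle()
    disk.query("1" + "1011")            # write 1 at address 1011
    print(disk.query("r" + "1011"))     # -> True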
12:30:51 <ais523> if you're trying to implement it in a language that doesn't have it, you normally need to alter commands to ones that record the stack height, but apart from that it isn't terrible
12:31:11 -!- variable has quit (Ping timeout: 255 seconds).
12:31:49 <ais523> hmm, looks like I have an implementation lying around: (~aa(n:^)~*(*)*^)n:^
12:32:01 <ais523> (this is Underload + a primitive n for determining whether the stack is empty)
12:32:24 <ais523> you do need to know where the end of the main stack is but in this construction, you can just use a separate symbol for it
12:32:56 <ais523> of course, the stack model you're using doesn't allow for arbitrarily complex elements
12:33:13 <ais523> although, hmm, I wonder if you could use something with paired ( and ) characters to delimit the elements
12:33:49 <wob_jonas> no, I definitely need a finite alphabet
12:33:53 <ais523> that'd work better in a three-stack system than a two-stack system, though (one for holding the program, one for holding the data, one for parsing the data)
12:34:01 <wob_jonas> for the arbitrarily complex data, I store it in the disk oracle
12:34:16 <wob_jonas> yes, but I can't afford three stacks. I need two stacks for this proof.
12:34:25 <ais523> wob_jonas: what you've got me thinking about is an Underload variant that runs on a finite-alphabet stack machine without a separate encoding layer
12:34:39 <ais523> it's easily doable in three
12:34:41 <ais523> two might be possible though
12:34:42 <wob_jonas> I could get a bit more relaxed with states, like, I don't insist on only one oracle state, but two stacks is important.
12:34:54 <ais523> via using the state machine to store data
12:36:36 <wob_jonas> This is for the proof of how powerful Consumer Society is, although for practical programs you don't need all this complexity, you can get away with a simpler construction that lets you access a huge but finite amount of data.
12:37:11 <ais523> ugh, you do need three, so this approach won't work
12:37:46 <ais523> mine for solving your problem
12:37:58 <wob_jonas> oh, the "underload with finite-alphabet stack"
12:38:09 <wob_jonas> I don't even understand what that's supposed to look like
12:38:31 <ais523> ['(',':','a','S','S',')',':','a','S','S']
12:38:39 <ais523> i.e. the parentheses are elements in the alphabet
12:39:17 <ais523> I was just picking a well-known Underload program
12:40:16 <ais523> so the problem with stack machines is that they're bad at copying data
12:41:10 <ais523> even without supporting nested parentheses, Underload's ':' seems difficult (impossible?) to implement without the help of a third stack or a separate encoding layer
12:41:34 <ais523> like, let's pose this as a problem: suppose you have a stack of 'a' and 'b', and an EOF character (say '$')
12:42:00 <ais523> suppose you allow any finite number of additional characters, and you have a second stack that starts empty
12:42:09 <ais523> can you duplicate the contents of the stack in polynomial time?
12:43:09 <wob_jonas> first you change each a to ac and each b to bd
12:43:24 <ais523> and then you bubble-sort
12:43:29 <wob_jonas> then you do a bubble sort to bubble up the c and d
12:43:45 <wob_jonas> should be quadratic even on two stacks
12:43:54 <wob_jonas> you just need a larger alphabet and enough states
12:43:55 <ais523> this is possibly the first serious use of bubble sort I've seen :-D
12:44:17 <ais523> actually I think this gives a general method for implementing two stacks with three
12:44:23 <ais523> err, implementing three stacks with two
12:44:24 <wob_jonas> you need to encode in the state whether you need another pass of the sort
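Spelled out as a Python sketch, with the two stacks as lists and the carried symbol standing in for the finite control state: tag each symbol with a copy symbol, bubble the copies to the top, rename them back; the swapped flag is exactly the "need another pass" bit just mentioned.

    def duplicate(main):
        """Duplicate a stack over {'a','b'} with EOF '$' at the bottom,
        using one temp stack and a finite alphabet; quadratic overall."""
        EOF, temp = "$", []

        def pour():                       # empty temp back onto main
            while temp:
                main.append(temp.pop())

        while main[-1] != EOF:            # 1. tag: a -> a c, b -> b d
            x = main.pop()
            temp.append("c" if x == "a" else "d")
            temp.append(x)
        pour()

        swapped = main[-1] != EOF         # 2. bubble copies (c/d) upward
        while swapped:
            swapped = False
            hand = main.pop()             # 'hand' plays the control state
            while main[-1] != EOF:
                nxt = main.pop()
                if hand in "ab" and nxt in "cd":   # original above copy: swap
                    temp.append(nxt)
                    swapped = True
                else:
                    temp.append(hand)
                    hand = nxt
            temp.append(hand)
            pour()

        while main[-1] in "cd":           # 3. untag the copies, now on top
            temp.append("a" if main.pop() == "c" else "b")
        pour()
        return main

    print(duplicate(["$", "a", "b"]))     # -> ['$', 'a', 'b', 'a', 'b']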
12:44:40 <ais523> or in fact any number of stacks with two
12:44:58 <ais523> use one stack that simply contains an arbitrary interleaving of all but one of the rest (using different symbols for the various stacks)
12:45:02 <wob_jonas> ais523: yes, I had some sort of question similar to that earlier
12:45:14 -!- imode has joined.
12:46:05 <ais523> actually you don't even need a bubble sort for this, just move all the information onto your temp stack until you find an element of the stack you're looking for
12:46:13 <ais523> delete it, encode it in the state, and move all the information back again
12:46:31 <ais523> this should also serve as a proof that you can keep one stack with entirely arbitrary information, because all the stacks you're using for computation are encoded on the other
12:46:59 <ais523> this is only an O(n) slowdown, I think? accessing any element of any stack takes a length of time bounded by the total number of elements in all stacks
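A sketch of that construction: k stacks interleaved on one list, every symbol tagged with its stack number; popping stack i digs down to the topmost i-tagged symbol (held in a 'state' register) and restores the rest, which is the O(n)-per-operation cost just described.

    class MultiStack:
        """k stacks stored interleaved on one stack, plus a scratch stack."""
        def __init__(self, k):
            self.main, self.temp, self.k = [], [], k

        def push(self, i, sym):
            self.main.append((i, sym))

        def pop(self, i):
            # displace everything above the topmost i-tagged symbol
            while self.main and self.main[-1][0] != i:
                self.temp.append(self.main.pop())
            found = self.main.pop()[1] if self.main else None
            while self.temp:              # put the displaced symbols back
                self.main.append(self.temp.pop())
            return found

    ms = MultiStack(3)
    ms.push(0, "x"); ms.push(1, "y"); ms.push(0, "z")
    print(ms.pop(0), ms.pop(0), ms.pop(1))   # -> z x y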
12:47:37 <wob_jonas> O(n) multiplicative-factor slowdown
12:48:17 <ais523> if the program was running in O(a) space and O(b) time, it now runs in O(a) space and O(ab) time
12:48:33 <ais523> which is actually a fairly good complexity result
12:48:43 <wob_jonas> which is actually important, to track the space and time separately
12:48:46 <ais523> given that space complexities are normally smaller than time complexities
12:49:05 <ais523> (with some definitions they're always smaller because it takes time to fill all that space)
12:49:31 <wob_jonas> the space here is going to be only polylog of the time, because the space will only be used to keep a bounded number of addresses on the disk, as many addresses as the simulated machine has registers, plus a few temporaries
12:49:47 <ais523> the simulated machine is sub-TC?
12:50:13 <wob_jonas> and I care about efficient runtime
12:50:18 <ais523> oh, the addresses get arbitrarily large
12:50:38 <wob_jonas> but only logarithmically large, because they're in binary and I allocate them sequentially
12:50:50 <wob_jonas> I don't care about garbage collection
12:50:55 <ais523> so yes, this runs with polylog slowdown I think
12:53:14 <wob_jonas> this will prove that Consumer Society is efficient in theory, as efficient as, say, SKI calculus or whatever reasonable computational model with unbounded memory size, up to polylog slowdown
12:53:49 -!- trout has quit (Ping timeout: 265 seconds).
12:53:59 <wob_jonas> and I can separately prove that it's also sort of efficient in practice if you only need a compile-time-bounded amount of memory, although not really so efficient that you'd want to run real programs in it, just eso-efficient
12:54:27 <wob_jonas> and that part is easier to program by far, no need to swap symbols from an alphabet on two stacks or such crazy stuff all the time
12:54:36 <ais523> well, I admit to being guilty of writing languages that run in double exponential time before now…
12:55:16 <ais523> although the reason my TCness proof of 3* stalled out is that I wanted to make a more efficient version of the proof
12:55:20 <wob_jonas> Amycus implemented the naive way is much worse than double-exponential
12:55:23 <ais523> involving a more efficient cyclic tag
12:55:33 <wob_jonas> that's why I describe how to implement it properly
12:56:24 <ais523> there's no point in having complexity classes beyond double-exponential because they're all far too slow to even comprehend
12:56:28 <ais523> let alone attempt to run programs in
12:56:43 <ais523> (this includes double-exponential as one of the classes that's too slow to comprehend)
13:34:06 -!- tromp has joined.
13:38:30 -!- tromp has quit (Ping timeout: 256 seconds).
13:41:47 <ais523> wob_jonas: it's surprisingly bothersome to see someone say nothing for 15 minutes, say "hmm" once, then go silent for another 15 minutes
13:42:28 <wob_jonas> I was just thinking about this disk model a bit more
13:54:52 -!- ais523 has quit (Remote host closed the connection).
13:56:55 -!- ais523 has joined.
14:00:42 -!- ais523 has quit (Excess Flood).
14:01:57 -!- ais523 has joined.
14:03:25 -!- ais523 has quit (Remote host closed the connection).
14:03:38 -!- ais523 has joined.
14:26:37 -!- SopaXorzTaker has joined.
14:27:06 -!- tromp has joined.
14:31:56 -!- tromp has quit (Ping timeout: 265 seconds).
14:40:40 -!- S_Gautam has quit (Quit: Connection closed for inactivity).
14:52:14 -!- imode has quit (Ping timeout: 265 seconds).
15:03:40 <HackEso> olist 1130: shachaf oerjan Sgeo FireFly boily nortti b_jonas
15:20:27 -!- tromp has joined.
15:24:49 -!- tromp has quit (Ping timeout: 248 seconds).
15:46:36 -!- wob_jonas has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client).
16:13:15 -!- tromp has joined.
16:18:04 -!- tromp has quit (Ping timeout: 260 seconds).
16:30:35 -!- sftp has joined.
16:38:48 -!- Decim_The_Dark has joined.
16:39:36 -!- Decim_The_Dark has changed nick to Decim_.
16:41:10 -!- Decim_ has left.
17:24:15 -!- ais523 has quit (Remote host closed the connection).
17:25:27 -!- ais523 has joined.
17:37:35 -!- ais523 has quit (Quit: quit).
17:46:06 -!- arseniiv has joined.
17:58:44 -!- tromp has joined.
18:02:41 -!- S_Gautam has joined.
18:03:58 -!- tromp has quit (Ping timeout: 268 seconds).
18:13:02 -!- Phantom_Hoover has joined.
18:23:40 -!- sebbu has quit (Ping timeout: 256 seconds).
18:27:57 -!- sebbu has joined.
18:28:52 -!- tromp has joined.
18:39:03 -!- tromp has quit (Remote host closed the connection).
18:40:02 <esowiki> [[Brainfuck in Python]] N https://esolangs.org/w/index.php?oldid=57062 * Kaa-kun * (+1212) Brainfuck in Python
18:41:30 <esowiki> [[Brainfuck in Python]] https://esolangs.org/w/index.php?diff=57063&oldid=57062 * Kaa-kun * (+124) Correction
18:49:03 <esowiki> [[Esolang talk:Community portal]] https://esolangs.org/w/index.php?diff=57064&oldid=55607 * Kaa-kun * (+432) /* Important message from Carbuncle */ new section
20:07:54 <zzo38> My high score so far in "Into the Blue" is 36690 points. Did you play that game? It is like a cross between Panel de Pon and 15 Puzzle.
20:12:54 -!- arseniiv has quit (Ping timeout: 256 seconds).
20:17:21 -!- SopaXorzTaker has quit (Quit: Leaving).
20:45:59 <esowiki> [[Special:Log/newusers]] create * Sylwester * New user account
20:57:27 -!- impomatic has quit (Ping timeout: 240 seconds).
20:59:41 -!- zzo38 has quit (Ping timeout: 255 seconds).
21:05:12 -!- zzo38 has joined.
21:41:07 -!- tromp has joined.
21:45:37 -!- tromp has quit (Ping timeout: 248 seconds).
22:16:16 -!- erkin has quit (Quit: Ouch! Got SIGIRL, dying...).
22:55:19 -!- imode has joined.
23:09:19 -!- MDude has quit (Ping timeout: 260 seconds).
23:58:11 -!- MDude has joined.