00:02:19 -!- oerjan has joined.
00:09:55 -!- variable has joined.
00:13:32 -!- trout has quit (Ping timeout: 265 seconds).
00:26:00 -!- Sgeo_ has joined.
00:27:58 -!- Sgeo has quit (Ping timeout: 264 seconds).
00:33:04 -!- imode has quit (Ping timeout: 260 seconds).
00:34:35 -!- imode has joined.
00:35:02 -!- mertyildiran has quit (Quit: Page closed).
00:44:14 -!- trout has joined.
00:46:42 -!- variable has quit (Ping timeout: 256 seconds).
00:56:34 CMV: graph rewriting cannot be done without using variables and binding said variables to subgraphs.
01:00:26 OH MAN
01:00:40 imode: define graph rewriting and variables here
01:01:53 . o O ( channel mini ...what?... )
01:01:59 change my view.
01:02:22 . o O ( darn, fooled by PPCG chat )
01:02:54 alercah: graph rewriting, rewriting over graphs. lhs -> rhs rules, only both the lhs and rhs are graphs. lhs is a pattern (that may include variables on edges, nodes, etc.) that's to be found in the given graph, and rhs is the pattern that's supposed to replace it.
01:03:16 so "variable" here means a label?
01:03:46 "variable" means a variable. naively, something you can store something in, namely a node label, an edge label, or an entire subgraph.
01:03:59 I know off the top of my head of two formulations of HR grammars, one that works in terms of an algebra of equations, like how a CFG works
01:04:07 the other works with labeled vertices
01:04:15 HR grammars?
01:04:28 to google!
01:04:29 hyperedge replacement
01:04:33 ah.
01:04:41 . o O ( highly recursive )
01:04:47 start with a hypergraph, with labeled vertices
01:05:04 what's graph rewriting for?
01:05:43 each rule is G -> H where G and H are hypergraphs
01:06:01 alright...
01:06:05 hm.
01:06:07 they must have a certain number of distinguished vertices, the same in each
01:06:23 do you have some example rules and transformations? you have my curiosity.
01:06:40 if you have a label match between a hyperedge in your graph, and the distinguished vertices in G, then you replace the edge with H
01:07:00 you could do it without labels in theory, but labels get you a lot more power
01:07:17 unlabeled is what I'm after in the long run.
01:07:20 because otherwise you don't have much control over how matches work
01:07:41 if you just want to eliminate labels, you could just invent unique gadgets to replace labels
01:07:56 yeah, just unique local "shapes".
01:08:14 imode: do you know the series-parallel graphs?
01:08:17 the problem is I can't wrap my head around how you'd specify certain arrangements of nodes and edges.
01:08:24 in circuits?
01:08:27 yeah
01:08:31 yeah.
01:08:54 the recursive procedure to build them up can be expressed as an HR grammar
01:09:03 ...interesting.
01:09:36 so let's say I have, for example, a loop that I want to extend. let's say I have a cycle of length 5, with nodes A, B, C, D, and E arranged in a ring, with E connected back to A.
01:10:02 I want to match any loop of 5 nodes, and then add a node after the last node. what would the rule look like for that.
01:10:11 HR doesn't do that
01:10:14 it only replaces edges
01:10:22 okay, so you can't add or remove vertices.
01:10:25 you can
01:10:32 you can replace an edge with a graph
01:10:39 wat
01:10:50 1 sec, let me make sure I get this right
01:10:57 sure, thanks for even bothering.
01:11:12 what about regex
01:11:22 imode: this was my masters thesis :)
01:11:27 o shit.
01:11:32 (hence the OH MAN)
01:11:34 came to the right person then. :P
01:11:50 imode: so do you know the definition of series-parallel graphs by reduction?
01:11:57 uh, shoot.
01:12:10 also the thing that sparked my interest was https://www.slideshare.net/slidarko/computing-with-directed-labeled-graphs-3879998
01:13:29 a graph is series-parallel if it can be reduced to P_2 by removing duplicate edges and by contracting a vertex of degree 2 (other than the endpoints)
01:14:31 contracting a vertex of degree 2. in or out degree.
01:14:37 or are we talking about undirected, sorry.
01:14:53 undirected
01:15:04 oh wait my bad, I got this construction of HR slightly wrong T_T
01:15:05 gotcha. okay.
01:15:10 (I worked with the algebraic one)
01:15:12 lmao, it's cool.
01:15:24 I take it you're somewhere around computer engineering?
01:15:36 -!- variable has joined.
01:15:37 the thesis was in graph structure theory
01:15:44 I'm a SWE at Google atm
01:15:49 damn. sweet.
01:16:55 the labeling is on edges, not vertices
01:17:03 and it denotes the nonterminals
01:17:07 alright.
01:17:19 I don't think the rules could possibly apply to all hypergraphs then, could it?
01:17:25 so e.g. for series-parallel, we have one nonterminal S
01:18:20 and we have rules v-S-w -> v-S-x-S-w, and v-S-w -> v=SS=w
01:18:27 kind of silly notation
01:18:35 but the first one is meant to be series construction and the second one is parallel
01:18:48 or in other words, subdivision and duplication
01:19:06 -!- trout has quit (Ping timeout: 260 seconds).
01:19:21 and then you have one rule v-S-w -> v-w
01:19:26 which just eliminates the nonterminal
01:19:28 that's pretty interesting notation, I can see the forking and merging.
01:19:34 yeah this is just invented right now for IRC
01:19:38 I would not recommend it in general :)
01:20:09 (hypergraphs are often depicted with, say, circles for edges and boxes for vertices, as otherwise they get pretty ambiguous pretty fast)
01:20:30 so the series-parallel graphs are the ones you get starting from one S-edge and then replacing edges
01:20:35 lmao, I figured. but that's interesting. so you can derive any "reasonable" circuit from those two rules.
01:20:40 yep
01:20:51 the result is well-known, it just happens to be a nice simple example of an HR grammar
01:21:04 but that does use variables.
01:21:08 to bind to labels.
01:21:17 it uses nonterminal labels
01:21:23 but in a similar way to how a CFG does
01:21:35 e.g. S -> xSw
01:21:50 ah.
01:22:23 you can use labels to restrict the bindings, yeah
01:22:56 so I guess I'd ask.. how would I use hyperedge rewriting to do general graph manipulations like I detailed above?
01:23:01 that's kind of what I'm after.
01:23:16 HR can't do that I think
01:23:37 ah. that's still interesting though, I didn't know there was a process for deriving those kinds of graphs.
01:23:50 it's actually the start of a rabbithole of deep results
01:24:06 I can understand why you don't want labeled verts, because in those graphs, all you _really_ care about are the edges.
01:25:03 yeah
01:25:10 nothing wrong with using labels to carry information, as well
01:25:16 tru.
01:25:47 the HR grammars are nice because they naturally correspond to the CFGs over strings
01:25:57 there are other kinds, though, like vertex replacement
01:26:18 similar idea, but they use vertices as the replacement point, rather than edges
01:26:22 what corresponds to unrestricted grammars?
01:26:44 I'm not sure off the top of my head
01:28:49 might be worth thinking about it in terms of the algebraic formulations
01:28:55 the other way of expressing HR grammars
01:29:04 and that was the one I was thinking of initially
01:29:11 hrmmmmm.
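[The reduction at 01:13:29 — repeatedly delete duplicate (parallel) edges and smooth internal degree-2 vertices until only the single edge P_2 remains — can be sketched directly. A minimal illustrative Python sketch, not code from the discussion; the function name and representation are invented.]

```python
from collections import Counter

def is_series_parallel(edges, s, t):
    """Series-parallel test by reduction: delete duplicate edges and
    contract (smooth) degree-2 vertices other than the endpoints s, t;
    succeed iff the multigraph collapses to the single edge s-t."""
    g = Counter(frozenset(e) for e in edges)  # multiset of undirected edges
    changed = True
    while changed:
        changed = False
        # remove duplicate (parallel) edges
        for e in list(g):
            if g[e] > 1:
                g[e] = 1
                changed = True
        # smooth one degree-2 vertex that is not an endpoint
        deg = Counter()
        for e in g:
            for v in e:
                deg[v] += g[e]
        for v in deg:
            if v not in (s, t) and deg[v] == 2:
                nbrs = [u for e in g if v in e for u in e if u != v]
                if len(nbrs) == 2:  # two distinct incident edges, no loop
                    a, b = nbrs
                    del g[frozenset((v, a))]
                    del g[frozenset((v, b))]
                    g[frozenset((a, b))] += 1
                    changed = True
                    break
    return g == Counter({frozenset((s, t)): 1})
```

The "diamond" (two length-2 paths from s to t) reduces to P_2; K4 gets stuck with every vertex at degree 3, matching the fact that K4 is the smallest non-series-parallel graph.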
01:29:43 but I mixed the two up
01:30:02 so in this system, you work over graphs where each graph has some labeled vertices, no two identical
01:30:06 (called sources)
01:31:01 you can freely rename sources, or make a source into a non-source
01:31:30 (technically renaming needs to preserve uniqueness so it's best expressed as swapping to keep things mathematically simple, but that's not necessarily relevant)
01:31:54 and then you can do a parallel composition, where you glue two graphs together at matching sources
01:32:11 huuuuuuuh. now _that's_ interesting. I can see how they're equivalent.
01:32:47 this can be expressed as an algebra
01:32:53 (that is, a bunch of things and some operations on them)
01:33:04 and then you can view a grammar as a system of equations
01:34:54 e.g. for series-parallel, S = st | S // S [parallel construction] | rename[t->p](S) // rename[s->p](S) [series construction]
01:35:06 st is a constant single edge with ends s and t
01:35:26 and then S // S is parallel since it joins two graphs at s and t
01:35:41 and the series construction is obtained by taking two graphs, renaming opposite ends of them to p, then merging
01:35:49 so p is fused but s and t remain as the endpoints
01:36:00 (oh I forgot you have to add a forget[p] operation)
01:36:05 (heh)
01:37:06 the graphs generated by this grammar are just the minimal solution to the system of equations
01:37:45 wild.
01:37:56 inherently, though, it can't do the sort of structural replacement you're looking for because it is focused on having a sort of exposed surface to glue graphs together, if you will
01:38:30 you could use it to extend a cycle, but not sensitively to the size
01:38:45 yeah I can see that. but from what I'm seeing, the gluing operation can be used to construct the desired graph.
01:38:52 just with a specific sequence of operations.
01:39:08 correct me if I'm wrong on that.
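[The source-based algebra at 01:30:02–01:36:00 (constant edge st, parallel composition //, rename, forget) can be modeled concretely. A minimal sketch under an invented representation — vertex count, edge list, and a label-to-vertex source map; all helper names here are illustrative, not from the log.]

```python
def edge(a, b):
    # the constant graph "st": one edge whose two ends are sources a and b
    return {"n": 2, "edges": [(0, 1)], "src": {a: 0, b: 1}}

def rename(g, old, new):
    src = dict(g["src"]); src[new] = src.pop(old)
    return {**g, "src": src}

def forget(g, label):
    src = dict(g["src"]); src.pop(label)
    return {**g, "src": src}

def par(g, h):
    """Parallel composition //: disjoint union, then fuse vertices that
    carry the same source label."""
    off = g["n"]
    remap = {v: v + off for v in range(h["n"])}
    for lbl, v in h["src"].items():
        if lbl in g["src"]:
            remap[v] = g["src"][lbl]       # glue matching sources
    edges = g["edges"] + [(remap[u], remap[v]) for (u, v) in h["edges"]]
    src = dict(g["src"])
    for lbl, v in h["src"].items():
        src.setdefault(lbl, remap[v])
    used = sorted({v for e in edges for v in e} | set(src.values()))
    dense = {v: i for i, v in enumerate(used)}  # renumber vertices densely
    return {"n": len(used),
            "edges": [(dense[u], dense[v]) for (u, v) in edges],
            "src": {lbl: dense[v] for lbl, v in src.items()}}

def series(g, h):
    # rename[t->p](g) // rename[s->p](h), then forget the glue point p
    return forget(par(rename(g, "t", "p"), rename(h, "s", "p")), "p")
```

`series(edge("s","t"), edge("s","t"))` yields the path of two edges (p fused in the middle, then forgotten), and `par` of two edges yields a pair of parallel edges — exactly the subdivision/duplication steps of the earlier grammar.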
01:39:17 yep
01:39:48 but it turns out you cannot create a grammar in this system that will have unbounded cycle size
01:39:54 (assuming you are restricting to finite graphs)
01:40:22 the fundamental limitation is the fact that you can only "keep track" of finitely many vertices
01:40:47 so eventually you'd lose track of how big the cycle is and just keep expanding it indefinitely
01:40:57 err wait
01:41:01 not all cycles
01:41:07 bleh, ignore that
01:41:11 I was thinking slightly wrongly
01:41:17 's all good. I think wrongly all the time.
01:41:53 you can of course do that because the series-parallel graphs do that :P
01:42:02 you can't have unbounded cliques though
01:42:17 because once you forget a vertex, its relationships are finalized
01:43:24 and this kind of model can't take edges away
01:43:42 so once you have a 5-cycle, you can't turn it into a 6-cycle
01:44:46 ah.
01:45:30 CFGs basically work on the same underlying technology
01:45:37 yeah, just over strings and not graphs..
01:45:41 yeah
01:45:49 and they have the same limitation of finite-ness...
01:46:13 I guess the unrestricted grammar version would probably just be allowing arbitrary equations on the left-hand side as well
01:46:31 yeah. I think that'd solve the vertex amnesia.
01:46:54 and actually allow you to construct arbitrarily large cycles.
01:47:37 it might
01:47:43 you also can do this with different underlying algebras
01:47:51 since HR is far from the only reasonable algebra on graphs
01:48:35 (there's some tricksy technology that's used to define what it means to be an equational set i.e. generated set here, but I think the general idea is sound)
01:48:35 given. all sorts of graph grammars and equivalent algebras to explore. :P
01:49:22 I'd probably point you at Courcelle if you want to read more :)
01:49:31 -!- trout has joined.
01:49:35 his papers cover a lot of relevant stuff
01:50:06 say no more fam, I've got it logged down to research.
01:51:29 there's also some cool relation to the regular languages here: you can generalize the idea of a regular language in a couple of different ways (e.g. myhill-nerode construction, DFAs)
01:51:40 (and a homomorphism-based one)
01:52:17 they all work out to be the same thing
01:52:26 -!- variable has quit (Ping timeout: 256 seconds).
01:52:29 but the relationship between recognizable sets (this generalization) and equational sets is not subset like with strings
01:52:50 but you do get the filtering theorem, that the intersection of equational and recognizable is equational
01:55:00 -!- S_Gautam has quit (Quit: Connection closed for inactivity).
02:00:36 @metar ENVA
02:00:38 ENVA 270150Z 10010KT CAVOK 21/13 Q1020 RMK WIND 670FT 09006KT
02:02:16 upcoming heatwave
02:03:18 imode: did I change your view :P
02:07:39 ehh, not really. to actually implement your concepts, you still need binding. :P
02:07:55 and it doesn't generalize to all graphs (I'm interested in working over semantic nets).
02:08:03 but by god what you covered was interesting as hell.
02:16:09 -!- MDude has quit (Ping timeout: 268 seconds).
02:22:50 -!- variable has joined.
02:25:58 -!- trout has quit (Ping timeout: 265 seconds).
03:00:49 -!- trout has joined.
03:04:06 -!- variable has quit (Ping timeout: 260 seconds).
03:33:48 -!- variable has joined.
03:36:50 -!- trout has quit (Ping timeout: 256 seconds).
03:42:50 -!- variable has quit (Quit: /dev/null is full).
03:42:52 [[ZZT-Flip]] https://esolangs.org/w/index.php?diff=57061&oldid=56952 * Zzo38 * (+258)
04:12:17 -!- sprocklem has quit (Quit: sprocklem).
04:12:38 -!- sprocklem has joined.
04:13:29 -!- sprocklem has quit (Client Quit).
04:16:52 -!- sprocklem has joined.
04:45:20 -!- MDude has joined.
05:24:35 -!- tromp has joined.
05:48:36 -!- oerjan has quit (Quit: Nite).
06:06:53 -!- tromp has quit (Remote host closed the connection).
06:26:08 -!- sftp has quit (Ping timeout: 244 seconds).
06:29:21 -!- XorSwap has joined.
06:38:38 -!- moei has quit (Quit: Leaving...).
06:40:18 -!- moei has joined.
06:40:41 -!- tromp has joined.
06:45:18 -!- tromp has quit (Ping timeout: 260 seconds).
06:45:48 -!- variable has joined.
06:47:42 -!- variable has quit (Client Quit).
07:21:47 -!- SopaXorzTaker has joined.
07:37:14 -!- wob_jonas has joined.
07:47:04 -!- imode has quit (Ping timeout: 260 seconds).
08:15:57 -!- S_Gautam has joined.
08:18:11 -!- XorSwap has quit (Quit: the creeping crawling chaos will return.).
08:26:41 -!- tromp has joined.
08:31:24 -!- tromp has quit (Ping timeout: 260 seconds).
10:01:43 -!- tromp has joined.
10:06:29 -!- tromp has quit (Ping timeout: 260 seconds).
10:08:13 -!- SopaXorzTaker has quit (Remote host closed the connection).
10:52:37 -!- variable has joined.
11:13:07 -!- fungot has quit (Ping timeout: 245 seconds).
11:13:25 -!- fungot has joined.
11:24:19 -!- trout has joined.
11:27:47 -!- variable has quit (Ping timeout: 265 seconds).
11:48:16 -!- tromp has joined.
11:53:14 -!- tromp has quit (Ping timeout: 260 seconds).
11:55:49 -!- variable has joined.
11:59:00 -!- trout has quit (Ping timeout: 245 seconds).
12:13:48 -!- ais523 has joined.
12:14:20 trying to prove Stun Step TC has been very interesting
12:15:02 the issue is that we don't have any TCness proofs for reversible counter machines yet, and it's fairly difficult to emulate anything using a reversible counter machine other than other reversible counter machines
12:15:17 I've been working on a high-level (as esolangs go) language to bridge the gap
12:15:28 but it's probably going to need an IDE to be effectively able to program in
12:15:32 hi ais523
12:15:41 hi
12:15:45 I have a theory question, not specifically for you, but for anyone here
12:16:50 it's well-known that a finite state machine with two stacks using some finite alphabet in the stacks, with UB on stack underflow, is Turing-complete, and you can simulate a one-tape Turing machine on that, right?
12:16:56 but I want a variant of this
12:18:21 yes, the well-known variant is TC assuming sufficiently powerful control flow
12:18:47 oh, you also need the alphabet to contain at least one character
12:18:56 err, at least two, if underflow is UB
12:19:08 (you use one as an EOF character and the other as a counter)
12:19:21 what's your variant?
12:19:38 The control is that the program has any finite number of states, and each state that pops has different followup states for each possible letter it can pop.
12:19:58 The alphabets are large enough, you could say the program tells how large, but finite.
12:20:17 The separate alphabets for both stacks are large enough but finite, that is.
12:21:14 that's TC so far (e.g. you can implement StackFlow in it)
12:21:37 I know it's TC. What I'm saying is that it's also well-known to be TC.
12:21:44 yes
12:21:51 For my variant, what I need to be well-known is that it can simulate a Turing machine with an oracle, where the program can interactively query the oracle for a string from an alphabet, and get a yes or no answer, and the program can call the oracle any number of times.
12:22:23 you want a stack machine with an oracle?
12:22:30 And the machine I want to use for this is the two-stack machine, except it has a special state when it queries the oracle, with two followup states, and importantly, the entire first stack is the input to the oracle.
12:22:37 ooh! I see
12:23:18 So what I want well-known is that the machine can temporarily save all its state to its second stack, not using the first stack or the control state, and continue, and call the oracle with arbitrary input strings from the alphabet this way.
12:23:24 I'm not sure if it's well-known, but it's true
12:23:52 The oracle input may need to be in a specific format, and can't have extra leading or trailing symbols or anything.
12:24:11 yes, that's not a problem
12:25:10 I could relax the requirement about the control state, but using only one stack temporarily and a specific format on the first stack is mandatory for my proof.
12:25:29 the way to think about it is that if you have one stack (as opposed to a queue or a tape), the thing restricting you from TCness is that you have nowhere to store the top elements of your stack while you read the bottom ones
12:25:41 And I also want to run this with a reasonable speed, with at most polynomial slowdown in the number of steps.
12:25:46 so you have to throw them away (a PDA) or else store them elsewhere, and a second stack is sufficient to store them elsewhere
12:26:12 but you can use the second stack for other things while you don't need it as a temporary
12:26:48 if you want to switch into "oracle mode", you do shuffling around to effectively put a sequence of imperative commands onto the main stack (each of these corresponds to a state that runs the command in question)
12:27:06 It's like the problem with those string replacement languages: you have to move the oracle input from somewhere on the second stack to the first stack, and only that input, with some letter swaps, which requires a lot of states but who cares.
12:27:10 "push symbol X on oracle stack", "pop oracle stack", "run oracle", "go to state Y"
12:27:38 then you switch to a state that simply repeatedly pops from the main stack and runs the appropriate command
12:27:38 -!- trout has joined.
12:27:47 Yeah.
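[The save/query/restore dance described around 12:23:18 and 12:26:48 can be sketched with two Python lists standing in for the two stacks. An illustrative sketch only: the function name, the `$` end marker, and the `oracle` predicate are invented; a real machine would do the same moves with its finite control.]

```python
END = "$"  # extra alphabet symbol marking where the saved stack begins

def query_oracle(stack1, stack2, query, oracle):
    """Stash the whole first stack on the second (behind a marker),
    place exactly the oracle's input string on the first stack, call
    the oracle, then restore everything.  `oracle` is any yes/no
    predicate on symbol strings."""
    stack2.append(END)
    while stack1:                  # save: move stack1 onto stack2
        stack2.append(stack1.pop())
    for sym in query:              # the query, and nothing else
        stack1.append(sym)
    answer = oracle(tuple(stack1)) # entire first stack is the input
    while stack1:                  # discard the query
        stack1.pop()
    while stack2[-1] != END:       # restore stack1; the two transfers
        stack1.append(stack2.pop())  # reverse twice, so order survives
    stack2.pop()                   # drop the marker
    return answer
```

The oracle's yes/no answer is the one bit that would be "remembered in the control state" (the two followup states mentioned at 12:22:30); here it is simply the return value.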
12:28:08 (if the oracle is outputting in state form, you need two states to continue running the program after it returns, one for true, one for false, so that you use the state as a temporary to remember the oracle's answer)
12:28:28 this should be an O(n) slowdown where n is the size of the stack, so if the program was polynomial it'll stay polynomial
12:28:57 interestingly, I have something like this as a primitive that I put in higher-level stack-based languages
12:29:46 How I'm trying to use this is that the oracle is going to be a random-access disk storing an infinity of bits, and you give the address to read or write in (say) binary to the oracle, plus tell it whether to write zero, write one, or read.
12:30:04 it's the opposite of "infra" from Joy, so I call it "ultra"; what it does is convert the entire stack into a single function, which when executed pushes the data in question back onto the stack
12:30:08 Also, for reasons, the disk requires you to write a bit before you read it, otherwise it's UB.
12:30:45 And this way a stack machine with that disk as oracle can "efficiently" simulate a RAM machine or a heap cons allocation machine, where the slowdown is a polylog factor of the runtime.
12:30:51 if you're trying to implement it in a language that doesn't have it, you normally need to alter commands to ones that record the stack height, but apart from that it isn't terrible
12:31:11 -!- variable has quit (Ping timeout: 255 seconds).
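[The disk oracle described at 12:29:46–12:30:08 — unboundedly many bits at binary addresses, with write-zero/write-one/read operations and UB on reading an unwritten bit — is small enough to model directly. A sketch with invented names; the UB is modeled as an exception rather than left undefined.]

```python
class BitDisk:
    """Random-access disk oracle: an unbounded store of bits addressed
    by binary strings.  Reading an address that was never written is
    UB per the log; here it raises instead."""
    def __init__(self):
        self.bits = {}

    def query(self, addr, op):
        # addr: binary string like "1011"; op: "write0", "write1", "read"
        if op == "write0":
            self.bits[addr] = 0
        elif op == "write1":
            self.bits[addr] = 1
        elif op == "read":
            if addr not in self.bits:
                raise RuntimeError("UB: read before write at " + addr)
            return self.bits[addr] == 1   # the oracle's yes/no answer
        return True  # writes just acknowledge
```

Since addresses are allocated sequentially (as noted later in the log), after t steps the machine only ever mentions addresses of length O(log t), which is where the polylog space bound comes from.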
12:31:49 hmm, looks like I have an implementation lying around: (~aa(n:^)~*(*)*^)n:^
12:32:01 (this is Underload + a primitive n for determining whether the stack is empty)
12:32:24 you do need to know where the end of the main stack is but in this construction, you can just use a separate symbol for it
12:32:56 of course, the stack model you're using doesn't allow for arbitrarily complex elements
12:33:13 although, hmm, I wonder if you could use something with paired ( and ) characters to delimit the elements
12:33:49 no, I definitely need a finite alphabet
12:33:53 that'd work better in a three-stack system than a two-stack system, though (one for holding the program, one for holding the data, one for parsing the data)
12:34:01 for the arbitrarily complex data, I store it in the disk oracle
12:34:16 yes, but I can't afford three stacks. I need two stacks for this proof.
12:34:25 wob_jonas: what you've got me thinking about is an Underload variant that runs on a finite-alphabet stack machine without a separate encoding layer
12:34:39 it's easily doable in three
12:34:41 two might be possible though
12:34:42 I could get a bit more relaxed with states, like, I don't insist on only one oracle state, but two stacks is important.
12:34:54 via using the state machine to store data
12:36:36 This is for the proof of how powerful Consumer Society is, although for practical programs you don't need all this complexity, you can get away with a simpler construction that lets you access a huge but finite amount of data.
12:37:11 ugh, you do need three, so this approach won't work
12:37:37 why do I need three?
12:37:43 no, not your approach
12:37:46 mine for solving your problem
12:37:58 oh, the "underload with finite-alphabet stack"
12:38:09 I don't even understand what that would be supposed to look like
12:38:31 ['(',':','a','S','S',')',':','a','S','S']
12:38:39 i.e. the parentheses are elements in the alphabet
12:39:02 what's with 'S','S'
12:39:09 ^ul (:aSS):aSS
12:39:09 (:aSS):aSS
12:39:16 oh
12:39:17 I was just picking a well-known Underload program
12:39:31 a quine
12:39:34 yes
12:40:16 so the problem with stack machines is that they're bad at copying data
12:40:24 of arbitrary size
12:40:42 yes, or rearranging data
12:40:52 but yes, copying is bad too
12:41:10 even without supporting nested parentheses, Underload's ':' seems difficult (impossible?) to implement without the help of a third stack or a separate encoding layer
12:41:34 like, let's pose this as a problem: suppose you have a stack of 'a' and 'b', and an EOF character (say '$')
12:42:00 suppose you allow any finite number of additional characters, and you have a second stack that starts empty
12:42:09 can you duplicate the contents of the stack in polynomial time?
12:42:54 I think so. it would be quadratic
12:43:09 first you change each a to ac and each b to bd
12:43:16 in one pass
12:43:24 and then you bubble-sort
12:43:29 then you do a bubble sort to bubble up the c and d
12:43:30 yes
12:43:32 right
12:43:45 should be quadratic even on two stacks
12:43:54 you just need a larger alphabet and enough states
12:43:55 this is possibly the first serious use of bubble sort I've seen :-D
12:44:17 actually I think this gives a general method for implementing two stacks with three
12:44:23 err, implementing three stacks with two
12:44:24 you need to encode in the state whether you need another pass of the sort
12:44:40 or in fact any number of stacks with two
12:44:58 use one stack that simply contains an arbitrary interleaving of all but one of the rest (using different symbols for the various stacks)
12:45:02 ais523: yes, I had some sort of question similar to that earlier
12:45:14 -!- imode has joined.
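[The quadratic duplication recipe at 12:43:09–12:43:45 can be sketched with Python lists as the two stacks. An illustrative sketch: where a real two-stack machine would use the '$' end marker and its finite control to detect the stack bottom and remember "did this pass swap anything?", the sketch uses Python's knowledge of the list bottom and a boolean.]

```python
MARK = {"a": "c", "b": "d"}      # tagged copies of the two symbols
UNMARK = {"c": "a", "d": "b"}

def duplicate(stack1):
    """Duplicate a stack over {a, b} in quadratic time using one
    auxiliary stack and the enlarged alphabet {a, b, c, d}: interleave
    a tagged copy next to each symbol, bubble-sort the tags to the
    top, then untag them."""
    stack2 = []
    # pass 1: a -> a,c and b -> b,d
    while stack1:
        stack2.append(stack1.pop())
    while stack2:
        sym = stack2.pop()
        stack1.append(sym)
        stack1.append(MARK[sym])
    # bubble passes: whenever an untagged symbol sits directly above a
    # tagged one, swap them; repeat until a full pass makes no swap
    swapped = True
    while swapped:
        swapped = False
        while stack1:
            x = stack1.pop()
            if x in MARK and stack1 and stack1[-1] in UNMARK:
                stack2.append(stack1.pop())  # tagged symbol rises past x
                swapped = True
            stack2.append(x)
        while stack2:                        # pour back onto stack1
            stack1.append(stack2.pop())
    # final pass: turn the tags back into plain symbols
    while stack1:
        stack2.append(stack1.pop())
    while stack2:
        sym = stack2.pop()
        stack1.append(UNMARK.get(sym, sym))
```

Only adjacent tagged/untagged pairs are ever swapped, so the relative order inside each half is preserved and the tagged half ends up as an exact copy on top, which is why the bubble sort works here.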
12:46:05 actually you don't even need a bubble sort for this, just move all the information onto your temp stack until you find an element of the stack you're looking for
12:46:13 delete it, encode it in the state, and move all the information back again
12:46:30 sure, then it's cubic time
12:46:31 this should also serve as a proof that you can keep one stack with entirely arbitrary information, because all the stacks you're using for computation are encoded on the other
12:46:59 this is only an O(n) slowdown, I think? accessing any element of any stack takes a length of time bounded by the total number of elements in all stacks
12:47:01 yes, that works
12:47:07 um
12:47:37 O(n) multiplicative factor slowdown
12:47:58 so like O(n**2) slowdown
12:48:17 if the program was running in O(a) space and O(b) time, it now runs in O(a) space and O(ab) time
12:48:22 yeah
12:48:33 which is actually a fairly good complexity result
12:48:43 which is actually important, to track the space and time separately
12:48:46 given that space complexities are normally smaller than time complexities
12:48:50 exactly
12:49:05 (with some definitions they're always smaller because it takes time to fill all that space)
12:49:31 the space here is going to be only polylog of the time, because the space will only be used to keep a bounded number of addresses on the disk, as many addresses as the simulated machine has registers, plus a few temporaries
12:49:47 the simulated machine is sub-TC?
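[The "any number of stacks on two" construction at 12:44:58–12:46:13 can be sketched directly: one real stack holds an interleaving of all the virtual stacks (symbols tagged with their stack id), the other is scratch space for digging. The class and method names are invented for illustration; the popped symbol, held here in a local variable, is what the real machine would "encode in the state".]

```python
class MultiStack:
    """Any number of virtual stacks on two real stacks.  Each access
    costs O(n) in the total number of stored symbols, so an O(b)-time,
    O(a)-space program slows down to O(a*b) time, as in the log."""
    def __init__(self):
        self.main = []   # interleaved (stack_id, symbol) pairs
        self.temp = []   # scratch stack used while digging

    def push(self, i, sym):
        self.main.append((i, sym))

    def pop(self, i):
        # dig: move entries aside until we hit virtual stack i's top
        while self.main and self.main[-1][0] != i:
            self.temp.append(self.main.pop())
        if not self.main:
            raise IndexError("virtual stack %d is empty" % i)
        _, sym = self.main.pop()   # the answer, "encoded in the state"
        while self.temp:           # undo the digging
            self.main.append(self.temp.pop())
        return sym
```

Pushing is O(1); only popping pays the linear digging cost, and the interleaving order of the untouched entries is restored exactly by the second transfer.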
12:50:02 no, the simulated machine is TC
12:50:05 it's a cons heap machine
12:50:13 and I care about efficient runtime
12:50:15 that was the point
12:50:18 oh, the addresses get arbitrarily large
12:50:22 yes
12:50:38 but only logarithmically large, because they're in binary and I allocate them sequentially
12:50:50 I don't care about garbage collection
12:50:55 so yes, this runs with polylog slowdown I think
12:51:00 right
12:51:08 that's all I want to prove
12:53:14 this will prove that Consumer Society is efficient in theory, as efficient as, say, SKI calculus or whatever reasonable computational model with unbounded memory size, up to polylog slowdown
12:53:49 -!- trout has quit (Ping timeout: 265 seconds).
12:53:59 and I can separately prove that it's also sort of efficient in practice if you only need a compile-time-bounded amount of memory, although not really so efficient that you'd want to run real programs in it, just eso-efficient
12:54:27 and that part is easier to program by far, no need to swap symbols from an alphabet on two stacks or such crazy stuff all the time
12:54:36 well, I admit to being guilty of writing languages that run in double exponential time before now…
12:55:16 although the reason my TCness proof of 3* stalled out is that I wanted to make a more efficient version of the proof
12:55:20 Amycus implemented the naive way is much worse than double-exponential
12:55:23 involving a more efficient cyclic tag
12:55:27 with integers that is
12:55:33 that's why I describe how to implement it properly
12:56:24 there's no point in having complexity classes beyond double-exponential because they're all far too slow to even comprehend
12:56:28 let alone attempt to run programs in
12:56:43 (this includes double-exponential as one of the classes that's too slow to comprehend)
13:17:04 hmm
13:34:06 -!- tromp has joined.
13:38:30 -!- tromp has quit (Ping timeout: 256 seconds).
13:41:47 wob_jonas: it's surprisingly bothersome to see someone say nothing for 15 minutes, say "hmm" once, then go silent for another 15 minutes
13:42:28 I was just thinking about this disk model a bit more
13:54:52 -!- ais523 has quit (Remote host closed the connection).
13:56:55 -!- ais523 has joined.
13:59:09 hmm
14:00:20 hello int-e
14:00:42 -!- ais523 has quit (Excess Flood).
14:01:57 -!- ais523 has joined.
14:03:25 -!- ais523 has quit (Remote host closed the connection).
14:03:38 -!- ais523 has joined.
14:26:37 -!- SopaXorzTaker has joined.
14:27:06 -!- tromp has joined.
14:31:56 -!- tromp has quit (Ping timeout: 265 seconds).
14:40:40 -!- S_Gautam has quit (Quit: Connection closed for inactivity).
14:52:14 -!- imode has quit (Ping timeout: 265 seconds).
15:03:40 `olist 1130
15:03:40 olist 1130: shachaf oerjan Sgeo FireFly boily nortti b_jonas
15:20:27 -!- tromp has joined.
15:24:49 -!- tromp has quit (Ping timeout: 248 seconds).
15:46:36 -!- wob_jonas has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client).
16:13:15 -!- tromp has joined.
16:18:04 -!- tromp has quit (Ping timeout: 260 seconds).
16:30:35 -!- sftp has joined.
16:38:48 -!- Decim_The_Dark has joined.
16:39:36 -!- Decim_The_Dark has changed nick to Decim_.
16:41:10 -!- Decim_ has left.
17:24:15 -!- ais523 has quit (Remote host closed the connection).
17:25:27 -!- ais523 has joined.
17:37:35 -!- ais523 has quit (Quit: quit).
17:46:06 -!- arseniiv has joined.
17:58:44 -!- tromp has joined.
18:02:41 -!- S_Gautam has joined.
18:03:58 -!- tromp has quit (Ping timeout: 268 seconds).
18:13:02 -!- Phantom_Hoover has joined.
18:23:40 -!- sebbu has quit (Ping timeout: 256 seconds).
18:27:57 -!- sebbu has joined.
18:28:52 -!- tromp has joined.
18:39:03 -!- tromp has quit (Remote host closed the connection).
18:40:02 [[Brainfuck in Python]] N https://esolangs.org/w/index.php?oldid=57062 * Kaa-kun * (+1212) Brainfuck in Python
18:41:30 [[Brainfuck in Python]] https://esolangs.org/w/index.php?diff=57063&oldid=57062 * Kaa-kun * (+124) Correction
18:49:03 [[Esolang talk:Community portal]] https://esolangs.org/w/index.php?diff=57064&oldid=55607 * Kaa-kun * (+432) /* Important message from Carbuncle */ new section
20:07:54 My high score so far in "Into the Blue" is 36690 points. Did you play that game? It is like a cross between Panel de Pon and 15 Puzzle.
20:12:54 -!- arseniiv has quit (Ping timeout: 256 seconds).
20:17:21 -!- SopaXorzTaker has quit (Quit: Leaving).
20:45:59 [[Special:Log/newusers]] create * Sylwester * New user account
20:57:27 -!- impomatic has quit (Ping timeout: 240 seconds).
20:59:41 -!- zzo38 has quit (Ping timeout: 255 seconds).
21:05:12 -!- zzo38 has joined.
21:41:07 -!- tromp has joined.
21:45:37 -!- tromp has quit (Ping timeout: 248 seconds).
22:16:16 -!- erkin has quit (Quit: Ouch! Got SIGIRL, dying...).
22:55:19 -!- imode has joined.
23:09:19 -!- MDude has quit (Ping timeout: 260 seconds).
23:58:11 -!- MDude has joined.