00:01:07 yes, cache is one of the reasons why you want to make context switches rare in the first place, not more frequent, even if they're within one user space. but that doesn't answer your question I think. 00:01:45 I don't see how any stack arrangement really fixes that. 00:02:20 Obviously you can make it worse by a really bad scheme, but still. 00:03:46 Well, imagine you had no compiler support. 00:04:11 The straightforward thing you'd do is define a struct for all the asynchronous state you need, including an "instruction pointer". 00:04:39 We already have some amount of compiler support. 00:05:33 And cpu arch support too, for that matter. It took me some time to figure that out. 00:05:38 ? 00:07:35 shachaf: you know when x86_64 added the AVX instructions, thus extending 128-bit XMM registers to 256-bit YMM registers, right? The existing XMM registers are aliased to the bottom halves of the YMM registers. 00:07:45 wait, which do you want... coroutines or userspace threads? 00:08:09 They seem like the same kind of thing? 00:08:12 what's the difference? 00:08:18 how exactly do you define them 00:08:39 also the real question is what you want, since you asked the question in the first place 00:08:59 I want a convenient and efficient way to write code that does asynchronous operations. 00:09:36 shachaf: well, that's a somewhat different application then. so coroutines or user-space threads or whatever, but applied to asynchronous IO in particular? 00:10:15 Coroutines are invoked explicitly (for example, when you hand control off to a generator), which is amenable to static analysis (though obviously only to the extent that function pointers are amenable to static analysis). 00:10:41 (Cooperative) userspace threads are coroutines that are invoked by some scheduler. 00:11:47 So anyway, when x86_64 did that, they made it so that all existing instructions that modified an XMM register never changed the upper halves of the YMM register. This has the disadvantage that whenever you have libraries compiled to SSE2 and libraries compiled to AVX in the same userspace, and you call between them, then either you have to zero all the upper halves, or the cpu has to deal with stashing 00:11:54 away the upper halves when it does YMM registers, 00:12:38 So coroutines often are coordinated in a predictable way. Threads usually aren't coordinated; they are scheduled but do unrelated work. 00:12:42 Let's imagine I don't care about XMM 00:12:45 -!- nchambers has quit (Quit: WeeChat 2.2). 00:13:06 or otherwise most SSE2 instructions would suddenly get an extra input, because they have to merge the old value of the output YMM register into the YMM register. The cpu allows both of these: the code explicitly zeroing the YMM registers or the cpu stashing the top halves. 00:13:12 But nevertheless, this seems inefficient. 00:13:34 shachaf: you don't care about XMM? but you want efficient code with compiler support? do you want to use a new cpu architecture too? 00:13:40 Now sure, you can build a scheduler on top of a coroutine mechanism, but any static analysis for coroutines is bound to fail. 00:14:03 So for a while I wondered why x86_64 was designed to work that way, instead of all SSE2 instructions just zeroing the top halves of any YMM register that they store into. 00:14:06 int-e: OK, then I'll say that I care about cooperative userspace threads. 00:14:30 Then I'll say that you're in for a lot of trouble.
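[A minimal C sketch of the struct-plus-"instruction pointer" scheme described above, with no compiler support: all of the operation's state, including where to resume, lives in an ordinary struct, and a step function dispatches on the saved position. Every name here (summer, summer_step, the three-input protocol) is invented for illustration.]

    #include <stdio.h>

    /* All asynchronous state for one operation, including the
       "instruction pointer" telling us where to resume. */
    struct summer {
        enum { SUM_START, SUM_WAIT, SUM_DONE } ip;
        int total;
        int remaining;
    };

    /* Run until the next suspension point; returns 1 while more
       input is still expected, 0 once the operation is finished. */
    static int summer_step(struct summer *s, int input) {
        switch (s->ip) {
        case SUM_START:
            s->total = 0;
            s->remaining = 3;
            s->ip = SUM_WAIT;
            return 1;                 /* suspend: wait for the first value */
        case SUM_WAIT:
            s->total += input;
            if (--s->remaining > 0)
                return 1;             /* suspend: wait for the next value */
            s->ip = SUM_DONE;
            return 0;
        case SUM_DONE:
            return 0;
        }
        return 0;
    }

    int main(void) {
        struct summer s = { SUM_START, 0, 0 };
        summer_step(&s, 0);           /* start; the argument is ignored here */
        summer_step(&s, 10);
        summer_step(&s, 20);
        summer_step(&s, 12);
        printf("total = %d\n", s.total);   /* prints: total = 42 */
        return 0;
    }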
:P 00:15:03 But it turns out that that would have been bad, because you couldn't do user-space coroutine switching correctly, which would be a worse cost than the one we have to pay now. 00:15:38 amd64 registers are so complicated 00:16:02 shachaf: yes they are. but if you are asking for efficient code with compiler support, you have to deal with that. 00:16:33 For example storing into the lower 32 bits zeroes the upper 32 bits, but storing into the lower 16 bits doesn't zero the upper bits 00:16:58 shachaf: yes, and that is for exactly the same reason as with the XMM registers 00:17:01 well, sort of 00:17:02 not quite 00:17:30 Anyway, what about the specific case of userspace threads ignoring SIMD registers for the moment? 00:17:44 back then in the 386, the extra input dependency didn't matter, so that was a sane design choice 00:17:56 today it's much more strange, and has a much more obscure reason 00:18:06 There's an obvious thing you can do, which is just allocate a stack for each thread and switch out rsp and rip (and the callee-save registers) on context switch. 00:18:11 shachaf: again, then you're losing more than you gain with userspace threads 00:18:36 um yes, except you have to save much more than that, specifically all the callee-saved registers 00:19:11 The callee-saved registers are not much more than the callee-save registers. 00:20:02 um huh? 00:20:13 oh 00:20:26 I mean, you have to save more than just rsp and rip, because there are more callee-saved registers than that 00:20:33 Hence the word "and" 00:20:54 There are six others in the System V amd64 ABI 00:21:10 Saving a few registers is not really the big bottleneck here. 00:21:58 the sad part is that you usually have to save both floating point control registers (the x87 one and the SSE2 one), AND the signal mask (not technically a register, but you do have to save it), even though most of the time they aren't actually changed in the program 00:22:14 but it's hard to make sure that no library linked into your program ever changes them, so you have to save them 00:22:25 You can just make sure that no library ever changes them. 00:22:55 Changing the signal mask is ridiculous, you have to do a system call per context switch. 00:23:52 shachaf: yes, that's sad, but you have to do it. the good news is that system calls like that are very fast. 00:24:03 sometimes they can even execute purely in userspace. 00:24:14 without switching to kernel mode that is. by magic. 00:24:45 I don't know if the signal mask thing can do that, but the point is, it doesn't have to do IO or anything, it just has to change some flags and check some other flags for pending signals. 00:24:49 No, you don't have to do it, because you're writing a program and you know what your program is doing. 00:25:07 Ah words, my eternal nemeses. I could've saved a lot of typing by just writing that coroutines are much more tightly coupled than threads (userspace or otherwise). 00:25:35 shachaf: yes, see above what I said about how recompiling all the libraries you link in, including all the nasty parts of glibc, does help, but in practice it's harder than what you might expect by "compiler support" 00:26:23 and the signal mask is the one you can get away with the easiest, because the program has more control over it, but there's no way you can optimize away the two floating point control registers 00:26:35 Let me check the source code to see what else I forgot that has to be saved 00:26:36 Maybe your program doesn't do floating point operations.
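[For the allocate-a-stack-per-thread scheme being discussed, the portable (if slow) way to see all of this in action is POSIX ucontext: swapcontext saves exactly the callee-saved state mentioned above, and it also saves and restores the signal mask, which is the per-switch sigprocmask system call being complained about. A minimal sketch, with an invented coroutine body:]

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, co_ctx;
    static char co_stack[64 * 1024];          /* the coroutine's own stack */

    static void co_body(void) {
        puts("coroutine: first part");
        swapcontext(&co_ctx, &main_ctx);      /* yield back to main */
        puts("coroutine: second part");
    }   /* returning continues at uc_link, i.e. main_ctx */

    int main(void) {
        getcontext(&co_ctx);
        co_ctx.uc_stack.ss_sp = co_stack;
        co_ctx.uc_stack.ss_size = sizeof co_stack;
        co_ctx.uc_link = &main_ctx;
        makecontext(&co_ctx, co_body, 0);

        swapcontext(&main_ctx, &co_ctx);      /* run until the first yield */
        puts("main: coroutine yielded");
        swapcontext(&main_ctx, &co_ctx);      /* resume it to completion */
        puts("main: coroutine finished");
        return 0;
    }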
00:27:00 shachaf: maybe, but it's very unlikely that all the libraries, including libc, never do any 00:27:09 it's pretty unlikely in fact 00:27:34 I don't think saving registers is the big bottleneck here anyway. 00:28:38 wait, there's some additional crazy stuff here I think 00:28:42 something that I forgot about 00:28:43 I think a much bigger problem with the system I described is that every time you do a stack witch, the entire stack is out of the cache. 00:29:28 shachaf: I don't see why it would be "the entire stack", and I don't see why it's the stack in particular that you care about 00:29:57 whenever you use more data than fits in your cache, most of the old data will be out of the cache 00:30:02 `grWp \bwitch 00:30:08 apt:APT is a technical term in cyber witchcraft, short for "adequate pernicious toe-rags". \ peace witch:Peace witches do alchemy: they turn mundane building material to gold. They're in the same universe where Bowser turned peaceful citizens of the Mushroom Kingdom to building material. 00:30:09 it doesn't matter much if it's stack or not 00:30:40 if you access very little data while a coroutine executes, then the other data isn't out of the cache. if you use a lot, then it is. 00:30:59 What do you mean? 00:31:03 sure, there are some fine details going on there because caches are complicated 00:31:12 but why is the stack in particular important? 00:31:35 Because you could do a stack switch and then call a function that uses a bunch of stack space but doesn't do any context switch. 00:31:46 And all that memory is not in the cache. 00:31:56 I have thought to instead design the instruction set without automatic cache control and instead you must specify your own controls 00:34:20 . o O ( there's an easy way out... just don't do a context switch. ) 00:34:24 zzo38: you know that already gets very hairy, but if you also do user-space coroutine switches, then it gets really horrible 00:34:30 int-e: yeah, that's what I said above. 00:34:55 Or do the transputer thing (3 registers, was it? sweet for context switching!) 00:35:26 int-e: OK, what's your alternative? 00:35:40 I'm not really tied to threads here, I'd like any sort of way to express these asynchronous things nicely. 00:37:49 I guess I'm not sure why you want to switch away out of the middle of a heavy computation in the first place; that sounds like a job for a proper OS thread to me. 00:38:24 A computation doesn't have to be very heavy to suffer from cache misses. 00:38:37 I'd like to measure this, admittedly. 00:38:51 I imagine you also have something event-driven where nothing ever takes long, which can all be handled by a single event-driven thread. 00:39:27 * int-e shrugs. 00:39:29 Sure, and then you'd make structs that correspond more or less to the contents of the stack for the threads, right? 00:39:49 YES! 00:39:54 If they carry state... yes. 00:40:01 sorry, that was unrelated to the coroutine thing 00:40:37 Sure, they carry state. They do a moderately complicated thing. 00:41:03 shachaf: and sometimes the compiler helps you with maintaining those structs implicitly 00:41:48 in the first stage by supporting closures; 00:42:41 and in the second stage by allowing the programmer to mark functions as "async", and those functions are compiled so that they or other async functions they call can have yield points where they save their state and IP into a struct and pick up from there when you continue them. 00:42:48 compilers can actually do both these days.
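[A crude way to measure the "switched-to stack is cold" effect discussed above: do the same total amount of memory touching once against a single hot block and once cycling through enough blocks that they cannot all stay resident. The sizes and counts are arbitrary choices for the sketch.]

    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* 256 * 32 KiB = 8 MiB total, comfortably larger than most L2 caches. */
    enum { NSTACKS = 256, STACKSZ = 32 * 1024, ROUNDS = 4096 };

    static unsigned char stacks[NSTACKS][STACKSZ];

    static double run(int nstacks) {
        struct timespec t0, t1;
        volatile unsigned sum = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int r = 0; r < ROUNDS; r++) {
            unsigned char *s = stacks[r % nstacks];
            for (int i = 0; i < STACKSZ; i += 64)  /* one touch per cache line */
                sum += s[i];
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        (void)sum;
        return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    }

    int main(void) {
        memset(stacks, 1, sizeof stacks);
        printf("one hot stack:      %.3f s\n", run(1));
        printf("cycling %3d stacks: %.3f s\n", NSTACKS, run(NSTACKS));
        return 0;
    }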
00:43:04 Sure, that's one of the schemes I'm describing. 00:43:24 the drawback is that you can't do a stackless yield through a function that isn't compiled async, even when it calls your function back, 00:43:41 but compiling every function that has callbacks async can cost a lot. 00:43:49 int-e: I wonder, because you have partially applied arguments, if it's possible to refactor that queue automaton you had such that it doesn't do subterm copying. your state tables would balloon but I don't think it'd be _that_ much of an increase. 00:44:13 so user-space context switches are easier if you have unpredictable yields from deep inside functions, but generally you want to avoid that. 00:44:55 I think requiring the user to mark async functions is reasonable. 00:45:00 shachaf: yes, that's what you said, I just wanted to explicitly tell how it works 00:45:24 imode: I don't know what you mean by "partially applied arguments" but any kind of graph reduction requires a representation of references and I'm not going there on this level of non-abstraction. 00:45:27 shachaf: yes, but the user also needs to learn how to write code efficiently. that is, of course, ALWAYS the case when you want efficient code, because compilers can't just do magic. 00:45:47 Yes. The question is how you can help a user who knows what they're doing, of course. 00:46:23 both that, and what is it that the user should do exactly 00:46:29 int-e: `sk results in a unique object that represents the partial application of S to K. further applications produce new, unique objects. I'm asking if it's possible to strip away the states you use to copy subterms around. 00:46:44 imode: (the point of graph reduction is that one can share subterms which is what you seem to be suggesting) 00:46:51 not at all. 00:46:56 not interested in graphs. 00:47:06 imode: in my automaton `sk is just that. 00:47:42 imode: It's an application of s to k. There is nothing "partial" about it, it's just not a redex. 00:47:43 let me rephrase: I'm wondering if it's possible to eliminate #, %, & and @. 00:48:52 it is a partial application... S takes 3 arguments, you've converted it to something that takes 1 and applied it to a single argument. 00:49:01 rather, 1 at a time. 00:49:28 imode: Yeah, no, that's not the way I see it. ` takes 2 arguments; S and K take none. 00:50:53 imode: I'm viewing this as first-order term rewriting. Terms are built from s, k and `; no partial applications. There are rules ``kxy -> x and ```sxyz -> ``xz`yz. I implement leftmost-outermost reduction for these two rules. 00:51:01 imode: you know that you almost certainly need at least one data structure that has two children. in this case it's the S2 closure. you can choose different primitives, but you'd have to get REALLY far from combinatory calculus before you get something with only list-like structures, no tree-like ones. 00:51:37 int-e: right, and in your execution, you're skipping/moving over subterms. I'm asking you if it's possible to remove those. 00:51:47 your state table would grow. 00:53:16 imode: I don't see how to do that with finitely many states. And copying is unavoidable if I want to implement ```sxyz -> ``xz`yz without changing the term representation. 00:54:30 something tells me it's possible to do that with finitely many states, but _not_ generally: you'd need to generate the state tables for any particular term tree. 00:58:19 https://ptpb.pw/-OCf/text something like the following. I'm curious if it's possible to avoid the explosion. 00:59:12 whup, one of those rules is wrong.
00:59:44 imode: the thing is, to skip over a subterm without using auxiliary symbols, you have to keep track of how many ` you have seen (since when you have to skip n subterms, after seeing a ` you have to skip n+1 subterms), and there is no bound on that number, even for a fixed starting term. 01:00:50 int-e: yes, there's no shortcut because you can't avoid a tree-like structure 01:00:55 int-e: correct. what I'm proposing isn't using a head-based evaluation method but running over the string using basic substitution rules. 01:00:57 you can't make it just lists nested to a limited depth 01:01:05 imode: oh now you have many more markers: (, S, K, ) 01:01:27 yeah but (SK) and such can be converted into single symbol markers. 01:02:11 just went for a walk earlier and thought of a possible reduction from SK to Thue. 01:02:13 imode: so now *you* have partial applications which I have avoided. 01:02:18 correct. 01:03:12 my thought is that there's a finite set of objects and a finite set of application rules that leads to an evaluator for combinatory logic. 01:03:41 I would like to understand the problem you are working on better 01:03:58 it is related to SK combinators implemented with string rewriting 01:04:04 yup. 01:04:17 and you use auxiliary characters to represent the tape head, guide the evaluation, and want to reduce them 01:04:22 imode: So... hmm. No, that won't end up being finite. I can produce a starting term that will evolve all of S, (SS), (SSS), (SSSS), ... for example. (It's not even hard if you know how to do abstraction elimination, since you can just iterate \x -> (x S) starting with S.) 01:05:13 I realize that. there's an encoding I've been trying to study/mull over of SK encoded in terms of wang tiles. 01:05:13 imode: It's also completely contrary to what I intended to do... namely, find a sweet tradeoff between the number of states required, simplicity, and number of extra symbols. 01:05:24 you pasted code before with the # % stuff, can we see it again 01:05:24 think I linked it yesterday. 01:05:36 imode: do BCKW instead of SK! :-) 01:05:45 it doesn't really help, but still 01:05:54 heh. 01:06:10 http://ceur-ws.org/Vol-1032/paper-01.pdf 01:06:17 see https://esolangs.org/wiki/Combinatory_logic 01:06:28 rain1: if you're wondering about context, the #, %, &, @ refer to states in http://paste.debian.net/1057921/ 01:06:42 ah queue automaton 01:07:35 ok how about this idea: find a much simpler queue automaton which is also turing complete and which S,K can be macro-expressed into in a simple way 01:07:49 perhaps that could reduce the rewrite system a bit 01:08:01 int-e: they give a sample reduction on page 9. 01:08:34 rain1: it would be trivial to implement any particular bitwise cyclic tag program. 01:08:50 rain1: yes, you do that by making an interpreter for one of those low-powered languages that ais523 likes 01:08:52 which involves a finite amount of objects with a finite amount of rewrite rules. it's specific, however, to that particular subterm, I think. 01:09:05 such as counter machine stuff 01:09:05 yeah but it would be difficult to express SK into cyclic tag wouldn't it? 01:09:15 very restricted ones 01:09:18 could we find something a bit more powerful than cyclic tag that's still very minimal 01:09:20 rain1: yes, it would be 01:09:30 anyway just one idea 01:09:38 you can't just get around that 01:09:43 rain1: (meaning BCT is trivial to compile to a queue automaton; in fact it would be fair to say that it *is* a queue automaton) 01:10:28 yeah.
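[A small sketch of the first-order rewriting view int-e describes, as plain C over the backtick string representation: skip() steps past one subterm (this is exactly the unbounded counting a finite-state scanner cannot do), and step() contracts the leftmost redex of ``kxy -> x or ```sxyz -> ``xz`yz in place. The fixed buffer sizes are arbitrary limits for the sketch.]

    #include <stdio.h>
    #include <string.h>

    /* Terms in prefix notation: t ::= '`' t t | 's' | 'k' */

    static const char *skip(const char *p) {       /* step past one subterm */
        return *p == '`' ? skip(skip(p + 1)) : p + 1;
    }

    /* One reduction step in place; returns 1 if a redex was contracted.
       In prefix notation the leftmost redex start is also outermost. */
    static int step(char *t) {
        for (char *p = t; *p; p++) {
            if (p[0]=='`' && p[1]=='`' && p[2]=='k') {        /* ``kxy -> x */
                const char *x = p + 3, *y = skip(x), *end = skip(y);
                size_t xl = (size_t)(y - x);
                memmove(p, x, xl);
                memmove(p + xl, end, strlen(end) + 1);
                return 1;
            }
            if (p[0]=='`' && p[1]=='`' && p[2]=='`' && p[3]=='s') {
                const char *x = p + 4, *y = skip(x), *z = skip(y), *end = skip(z);
                size_t xl = y - x, yl = z - y, zl = end - z, n = 0;
                char buf[4096];                      /* ```sxyz -> ``xz`yz */
                buf[n++] = '`'; buf[n++] = '`';
                memcpy(buf + n, x, xl); n += xl;
                memcpy(buf + n, z, zl); n += zl;
                buf[n++] = '`';
                memcpy(buf + n, y, yl); n += yl;
                memcpy(buf + n, z, zl); n += zl;
                memmove(p + n, end, strlen(end) + 1);  /* shift the tail */
                memcpy(p, buf, n);
                return 1;
            }
        }
        return 0;
    }

    int main(void) {
        char t[4096] = "```skk`ks";   /* S K K (K S), reduces to K S */
        while (step(t)) puts(t);      /* prints ``k`ks`k`ks then `ks */
        return 0;
    }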
they encode subterms as specific objects, T0 through T7. 01:10:44 so you can generate a set of tilings for _that particular_ string of applications. 01:11:08 or perhaps one of those 1-dimensional cellular automata thingies that ais523 used to consider earlier could be better than a counter-machine based approach 01:11:20 you know, the really small universal 1-dimensional CAs 01:11:36 I suppose the fundamental difficulty arises from the fact you are processing a linear string, but it represents a tree 01:11:53 rain1: yes, and it's a provable performance hit too 01:11:54 so tree parsing is mixed up with evaluation - that is what you were discussing before I interrupted 01:12:03 b_jonas: you can generate a CA to evaluate a particular term. 01:12:05 I think I am up to speed now 01:12:28 and if you simulate the CA, you can also easily add various extension rules 01:12:37 as in, periodic background patterns 01:12:50 which sort of count as cheats for pure CAs, but not here 01:13:09 so, what's interesting about that is that you can generate a CA from a CL term that's an interpreter of CL terms. 01:13:36 oh that is interesting but I imagine the result will be a large CA 01:14:01 and it will operate on an SK representation of SK combinators, which introduces overhead 01:14:01 that honestly depends on your combinator basis. 01:14:04 imode: no. I don't think that's true. the interesting thing about it is that there are very small universal CAs, so you can emulate them easily in a cyclic queue machine. 01:14:12 we can switch to the X single combinator basis 01:14:20 b_jonas: what's not true? 01:14:30 imode: it's not true that you can easily generate a CA from a CL term 01:15:02 did you read the PDF I linked. 01:15:05 they give a schema for it. 01:15:07 no 01:15:18 http://ceur-ws.org/Vol-1032/paper-01.pdf page 6 through 9. 01:15:37 isn't that wang tiles rather than a CA 01:15:40 gives the general method for generating a tileset from a particular term tree within a combinator basis. 01:15:54 b_jonas: One of the things that a compiler can do that doesn't happen in a struct is liveness analysis. 01:16:02 rain1: a CA is just a wang tileset that discards its history. 01:16:16 If you only need some subset of the fields for each state, a compiler can figure that out automatically. 01:16:17 a wang tiling can be seen as a representation of the history of a CA. 01:16:37 it's trivial to map a 2D wang tiling to a 1D CA, and they give an encoding of CL as a 2D wang tiling. 01:17:41 shachaf: I don't really understand that. if the compiler has the support for compiling async (yieldable) functions in a way that they suspend to a struct, then it can also do liveness analysis on it 01:18:04 Yes, sure, I mean compared to a manual struct. 01:18:05 is there some difference between that case and the case when you do user-space context switches? 01:18:11 i think that is a limited view of wang tiles 01:18:15 but I get what you mean 01:18:26 I mean this is another reason you might want to have compiler support. 01:18:35 a 3D wang tiling generates a 2D CA, for example. 01:18:52 there are also mappings between 2D wang tilings and 2D CAs. 01:19:13 shachaf: not really. a programmer can also figure out which variables they changed, and store only those, or load only the ones they use 01:19:15 you just have a large number of states. 01:19:38 Sure, but then it's not really a struct anymore. 01:19:49 Certainly you can do it. 01:19:50 don't you agree there are wang tile sets that don't relate to CA?
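[One direction of the CA-to-Wang-tiles mapping just mentioned, as a sketch: take a 1-D CA whose cell depends on two cells of the previous row, new[i] = f(old[i], old[i+1]), and emit the tile set { N=a, W=a, E=b, S=f(a,b) }. W copies N, so a tile's E edge reads its right neighbour's old value, and the rows of any valid tiling are consecutive CA configurations; a nondeterministic CA simply contributes several tiles per (a,b) pair. f here is elementary rule 102 (centre XOR right), an arbitrary choice for the demo.]

    #include <stdio.h>

    /* The local rule: new cell = f(old cell, old right neighbour). */
    static int f(int a, int b) { return a ^ b; }

    int main(void) {
        /* One Wang tile per (a, b) pair of old cell values:
           N = own old value, W = copy of N (readable by the left
           neighbour), E = right neighbour's old value, S = new value. */
        for (int a = 0; a < 2; a++)
            for (int b = 0; b < 2; b++)
                printf("tile: N=%d W=%d E=%d S=%d\n", a, a, b, f(a, b));
        return 0;
    }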
01:20:01 shachaf: heck, even the compiler can figure out most of that from just the individual, what's it called, primitive block? whatever you call a section of the code that has only one entry point and ends at yields 01:20:15 rain1: can you show me an example of one? 01:20:25 basic block 01:21:10 imode: wtf is that paper... 01:21:11 shachaf: um, I mean, if you always reuse the same struct for the storage of the same coroutine, mutating it in place 01:21:20 int-e: a wild ride. 01:21:27 which of course works that simply only if you don't have deep call stacks 01:21:29 but still 01:21:40 heck, even without that it still works 01:21:43 even in a non-mutating way 01:21:53 sorry, the in-place store isn't very relevant 01:21:57 imode: it starts going wrong when it tries to treat SKI as a monoid. That would require b = I b = K I a b = K a b = a 01:22:11 so no, I just don't get the point 01:22:36 (monoids are associative) 01:23:20 int-e: wait, a monoid with respect to composition? HAHAHAHA 01:24:09 I mean, you might have an operation that does steps A, then B, then C. 01:24:20 "Wang Tiles is a monoid, on Tiles as terms, with Wang-arrangement as the only operation." 01:24:24 imode: it's also cute how it builds full SKI reduction into the tiles ("Connection Tiles"). 01:24:56 You can write the state as struct { enum { Start, A, B, C, Done } ip; AState a; BState b; CState c; }; or something. 01:24:57 imode: and the claim that you end up with a finite set of tiles in the end... I don't see where this is proved (which is no surprise because it's necessarily false) 01:25:09 But in fact you only need enough memory for one of those. 01:25:26 (In this case you can just use a union. But in general you might share some state between some steps and so on.) 01:25:27 int-e: if you look at page 9, you can see an example breakdown: what they've done is map a particular term to a particular set of tiles. 01:25:37 shachaf: sure, you make it a tagged union instead. 01:25:45 possibly sharing some common state that is not often modified 01:25:53 such as the caller pointer 01:25:53 so you don't end up with a _general_ set of evaluation rules for SKI in wang tiles, but you have a schema for _converting_ arbitrary SKI terms to wang tiles. 01:26:12 imode: infinitely many of them, and SKI reduction is built into the tile set. 01:26:31 Sure, you can do all that, but liveness analysis does it automatically for you, is all I'm saying. 01:26:33 no, not infinitely many. 01:26:42 even if they don't actually treat it as a monoid (but I think they do) this is utter trash. 01:26:57 you're not getting what I'm saying: they build a tileset for evaluating _one_ specific SKI term. 01:27:00 not the whole space of them. 01:27:08 and there's a schema for generating those tilesets. 01:27:10 imode: if it has a normal form 01:27:16 shachaf: the scopes of variables in the code, before compiling to an async function, are mostly tree-like, that is, of any two variables, either the first lives longer or the second lives longer; the scope of one contains the other, 01:27:48 shachaf: and that implies that if you turn it into a struct, you always get a properly nested union-struct, with copies/moves necessary only when the original code copies/moves a value. 01:28:49 if this paper is bullshit then supercombinators are bullshit. :P 01:29:52 imode: For every reduction step, the full redex must be captured in a Wang tile (Fig.
2 is the only place where reductions happen), so all we need is a starting term with unbounded redex size. 01:30:10 imode: Really, this is nonsense. I'm sorry to have seen this paper. 01:30:25 I mean, write up a response to it with proofs. 01:30:55 I think hasty dismissal isn't a good way of doing things. 01:31:00 two dimensions might help to represent a tree structure, because you can put each child expression of the tree one row below the parent, so it's easier to skip the expression by just skipping through markers in the row above, without having to count parentheses 01:31:01 oh 01:31:09 I see int-e's point 01:31:25 it would require infinitely many tiles 01:31:26 I mean, if you want to simulate evaluating combinatory logic with a two-dimensional CA 01:31:30 but a wang tileset should be finite 01:31:32 of course with just wang tiles it won't help, 01:31:43 because with wang tiling you need the second dimension for time 01:31:48 (it may be parallel, but still) 01:33:15 bear in mind there are also other versions of this paper that go a little more in depth with actual proofs. 01:33:18 let me find one. 01:34:18 "22nd International Workshop on Concurrency, Specification and Programming" ... sad. 01:39:00 I can see that some wang tilings are like CAs 01:39:08 but I don't see that all wang tilings are CAs 01:39:13 I can't give an example 01:39:41 it seems like the constructed tilesets have been built with a uniqueness property where the next row/col can be determined by the current one 01:40:13 or would it be that each cell is determined by the 3 above it? 01:40:44 rain1: of course not. they're more powerful than CAs, because they can do time travel loops, which makes them PSPACE-complete or something. 01:40:57 thank you 01:41:03 that's the key i was missing 01:41:20 pardon me? 01:41:27 b_jonas: got a proof for that? lmao. 01:41:53 imode: well, they're actually uncomputable, because the space is unbounded 01:41:59 so PSPACE-complete is not really true 01:42:04 they're not limited to PSPACE 01:42:11 you know how they're uncomputable, right? 01:42:33 yes... very much aware. 01:42:38 imode: Wang tiles *can* implement string rewrite systems, using an approach like that. But it turns out that combinatory logic is not a string rewrite system, and you actually need to put in some effort (which the authors haven't done) to treat it by string rewriting. 01:42:42 you can just simulate any nondeterministic turing-machine with one tape 01:42:57 the key point is that it allows any nondeterminism 01:43:02 which is why you can go back in time 01:43:08 i don't like the way imode responds 01:43:35 b_jonas: how does nondeterminism allow you to go backwards in time? 01:43:52 rain1: what's that supposed to mean? 01:43:56 imode: crucially, the objects of string rewriting (i.e., strings) do have a monoidal structure. 01:44:38 imode: you can send information back in time by nondeterministically guessing the information back in the past, sending that information to the future, then making the solution impossible in the future if the information you want to send back isn't the one that you got from the past 01:45:27 imode: usually it's told the other way, as in, time travel allows arbitrary nondeterminism, in the TCS sense, but it works backwards too 01:45:32 you've lost me, but I don't think that's how nondeterministic execution works. 01:45:55 imode: how does it work then? 01:45:55 imode: and also crucially, the redexes for any given SRS have bounded size. 01:47:00 b_jonas: Sometimes (often?)
a variable is declared early and then stops being used in the middle of a block. 01:48:16 int-e: can you explain to me how they got their results, then? why is it that they were able to compile a particular term down to wang tilings? I don't quite buy the argument that the compiled terms are required to have normal forms. 01:48:43 imode: paper is incredibly patient. you can write anything you like. 01:48:45 b_jonas: you have multiple possible choices per computation step, and thus multiple possible paths. 01:48:51 shachaf: yes. then it's not really strictly tree-like. but note that that only matters if there's a yield in between when one variable stops being used and when another variable starts being used. it doesn't matter if you just compute a different new variable from the old one in one step without a yield, which is very common. 01:48:58 -!- tromp has quit (Ping timeout: 244 seconds). 01:48:59 imode: there are no results. there are unsubstantiated claims. 01:49:00 imode: yes. that's the guessing part. 01:49:17 int-e: and yet they were able to show a particular reduction... 01:49:19 you guess by starting a possible choice for each possible value the information you're guessing can have. 01:49:25 b_jonas: Why doesn't it matter? 01:49:55 shachaf: because where there's no yield, you don't have to save or load the values to the state struct. 01:50:00 int-e: help me out here, what's the problem? can you show me a combinatory term that's not able to be translated (using this schema) to wang tiles? 01:50:09 Sure, in that case. 01:50:14 imode: yes, you can write down any finite reduction in that fashion. but it doesn't generalize to infinite reductions, and the system is not sound (it allows you to write down non-reductions that have wrong results, like the a = b chain above) 01:51:12 shachaf: yes, I admit that in some cases it's not strictly tree-like, and you will definitely get such a case when eg. you have a server that's handling a hundred requests asynchronously and each can take a different amount of waiting on another party. 01:51:35 one moment, gotta walk the dog. highlight me and I'll read back. 01:52:02 same here, sans dog 01:55:29 int-e: well, technically you may need a more complicated a=b chain than the one above, because if you don't have S in your expression, then you can always just restrict the rules to do evaluation steps only when the argument is an atom. you need something with parentheses required in it. 01:55:44 but yeah, that's if a is a parenthesized expression. 01:55:56 hmm no, I'm stupid 01:56:22 you do have parentheses, where you write I b = K (I a) b = K I a b = K a b 01:56:29 it's the middle step there that's wrong 01:57:01 so yeah, I guess you might not need parentheses 01:59:31 um 01:59:34 I guess you might not need S 01:59:38 you do need the parens 02:04:49 back. 02:06:12 imode: ignore what I said while I was gone, that's nonsense. listen to int-e, he's got it right. 02:06:32 so, here's my reasoning: if you look on page 9 of that paper, the list of "Subterms", they've taken a particular combinatory term and knocked it down to a particular set of objects which they map using a schema to a particular set of tiles, and the tiling of those tiles produces the result. even if you had something like the Y combinator written down, when applied to a particular combinator term, _you would 02:06:32 still_ be able to generate a finite amount of tiles, but your tiling would be infinite.
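[The properly nested union-struct described above, spelled out as a C sketch: request_id stays live across every phase, while the per-phase locals (all invented placeholder types here) overlay each other because their live ranges never overlap; this is what liveness analysis would derive automatically.]

    #include <stdio.h>

    struct a_state { int fd; };                  /* placeholder phase locals */
    struct b_state { char buf[64]; int len; };
    struct c_state { long bytes_written; };

    struct op_state {
        enum { OP_START, OP_A, OP_B, OP_C, OP_DONE } ip;
        int request_id;        /* live across all phases: outside the union */
        union {                /* live in exactly one phase at a time */
            struct a_state a;
            struct b_state b;
            struct c_state c;
        } u;
    };

    int main(void) {
        /* The overlay means the suspended state costs the max, not the
           sum, of the phase structs (plus the shared fields and tag). */
        printf("sizeof(struct op_state) = %zu\n", sizeof(struct op_state));
        return 0;
    }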
02:08:25 I agree that the paper is somewhat underspecified but I don't see the damning evidence that it's all trash. 02:09:46 I'm also not too keen on insulting people's work after a quick glance at it, but that's just my approach. 02:10:29 still trying to find that alternate version of said paper. 02:19:45 -!- nchambers has joined. 02:19:59 -!- nchambers has changed nick to uplime. 02:22:48 -!- john_metcalf has quit (Ping timeout: 272 seconds). 02:24:43 http://ceur-ws.org/Vol-1269/paper34.pdf found this one. it details the same stuff but also includes an implementation of a general CL interpreter within wang tiles by way of a turing machine encoding, similar to yours, int-e. 02:25:47 so to me it details two approaches: compiling CL terms to their corresponding tilesets, and interpreting an encoding of CL with a single tileset. 02:26:33 the latter is obvious. the former not so much, but I don't see why it isn't a possible thing. perhaps there even exists an implementation to go along with the paper. I'll try to track one down. 02:43:49 Hm, wang tiles correspond quite closely to 1-d nondet cellular automata. I'm not sure if I realized that yet or not. That actually explains why wang tiles are considered powerful, in the sense that they can do hard-to-predict things with just a few different tiles. The tiles correspond to the states of the CA, the CA has that neighborhood where each cell depends on only two cells in the past (not three), 02:44:02 only the border conditions of Wang tiles are rather strange, very different from what we're used to with CAs. 02:44:43 I'm not sure if I had realized that before about Wang tiles or not. 02:45:18 I think I missed it because I learned about Wang tiles and then mostly forgot about it, and only later learned about how even deterministic 1D CA are so powerful. 02:45:57 yeah. there are multiple possible tilings, and it's actually pretty cool to watch a backtracking solver (starting from a seed tile) try to work out a tiling by spiralling out. 02:46:29 it is possible to make fully deterministic tilings, though, which correspond to fully deterministic 1D CAs. 02:46:40 the specific literature eludes me at the moment. 02:47:53 imode: sure, because a nondet CA is strictly more powerful than a deterministic CA 02:48:05 that's usually the way we define nondeterministic variants of things 02:48:23 what? 02:48:33 that's not at all true. 02:48:47 um, example? 02:49:06 the fact that a nondeterministic TM can be simulated within a deterministic TM and vice versa. 02:49:11 the same goes for CAs. 02:49:24 no no 02:49:27 that's not what I'm saying 02:49:28 nondeterministic TMs are not more powerful than deterministic TMs. 02:49:39 I'm only saying one dimension 02:49:40 um 02:49:42 one direction 02:49:56 elaborate? 02:50:03 that the deterministic thingy can be translated to a nondeterministic thingy easily 02:50:06 not backwards 02:50:46 nondeterministic TMs can be pretty easily converted to deterministic TMs. 02:51:01 imode: no, that's backwards 02:51:52 how is that backwards. 02:53:57 imode: um, there's probably some terminology problem here 02:54:04 nondeterministic TMs can be converted to deterministic TMs via dovetailing, which isn't that hard of a thing to do. they can be seen simply as a class of machines that utilize the dovetailing technique. 02:54:38 I can sort of see what you mean if you say that nondeterministic TMs can be seen as taking deterministic TMs as a subset.
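[A sketch of the determinization direction being discussed: enumerate the binary choice strings of a nondeterministic computation in order of increasing length, rerunning the deterministic simulation once per string. This simple form assumes every branch halts; full dovetailing interleaves steps of all branches instead. The accepts predicate is an invented stand-in.]

    #include <stdio.h>

    /* Stand-in for "simulate the machine, resolving the i-th
       nondeterministic choice with bit i of 'choices'". */
    static int accepts(unsigned choices, int len) {
        return len == 3 && choices == 5;   /* invented: accept path 1,0,1 */
    }

    int main(void) {
        for (int len = 0; len < 16; len++)              /* deepen gradually */
            for (unsigned c = 0; c < (1u << len); c++)  /* all paths of this length */
                if (accepts(c, len)) {
                    printf("accepting path: length %d, choices %#x\n", len, c);
                    return 0;
                }
        puts("no accepting path found up to the depth limit");
        return 0;
    }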
but I can invert that and say that nondeterministic TMs are really just a specific subclass of TMs. 02:55:02 s/subclass of TMs./subclass of deterministic TMs. 02:56:12 imode: um yes, but the dovetailing is still much harder in general than converting the other way, from a deterministic TM to a nondeterministic TM, for which you don't need dovetailing, only some trivial transformation 02:56:48 the specific transformation of course depends on how exactly you define the TM, but then so does the other direction 02:56:58 (how you define TM, and how you define nondeterministic TM) 02:57:00 like I said, I can invert the relation and say that nondeterministic TMs are just classes of deterministic TMs in which you want to phrase parallel computations. 02:57:08 or pseudorandom choice. 02:57:16 between multiple branches. 02:57:39 it's more 1-D cellular automata than turing-machines that matter here, and the "dovetailing" part actually gets much easier there, mind you 02:57:48 because the CAs are naturally parallel 02:58:21 the hard part is just making sure that the memory of each simulated copy grows fast enough, which is what remains of the dovetailing 02:58:21 I'm not sure they are, but they _do_ lend themselves pretty naturally to parallel evaluation methods. 02:58:58 like I get what you're saying: nondeterministic TMs are harder to phrase as deterministic TMs because of the extra overhead. 02:59:16 but I can make the same argument that simulating a game of chess is hard on a deterministic TM rather than on a gameboard with a couple of pieces and some parallel rewrite rules. 02:59:40 it's not a fault of the concept of deterministic TMs, but it's just a class of machines. 03:01:26 your choice of model defines what kinds of computations are easy and what kinds are hard. 03:05:01 imode: wait, are you saying that the paper that translates the combinatory logic evaluation to Wang tiles uses such a transformation that can make the Wang tile pattern infinite in some significant way (not periodic, nor even some more complicated pattern that's easy to describe in a finite way), even if the evaluation tree of the combinatory logic is finite? 03:05:51 not sure what'cha mean, could you rephrase that? having a hard time parsing stuff. 03:07:13 imode: um, if you have a set of Wang tiles, there are at least two questions you can ask. (1) one question is whether they can tile a rectangle with adjacent edges identified (eg. a flat torus), which is more or less the same as asking whether it tiles the plane in a periodic way; 03:07:31 (2) the other question is whether the tiles tile the plane, not necessarily in a periodic way. 03:09:49 ah. what I was really just saying is that, using their scheme, it's possible to generate a set of Wang tiles that correspond to terms and subterms (and applications of them) within a certain system of combinators. so small applications of combinators generate other small applications of combinators. these are always finite: one particular string of combinator applications generates another string of 03:09:49 combinator applications, and each string can be taken as a single object or multiple objects being applied to each other. 03:09:51 these two questions are essentially different: the first one is computable with the wang tiles as input, the second one is not. 03:10:13 this is magic that depends on the periodic boundary conditions. whether the wang tiles tile a rectangle with all zero edges isn't computable. 03:10:19 if I understand correctly.
03:10:33 so I guess maybe there are three different questions, not two. 03:10:34 whether there exist tilings at all for certain tilesets is undecidable, yeah. 03:11:21 so, the key here is this: we can represent subterms like (SKK) as single terms. any particular combinatory term is guaranteed to generate some particular set of subterms, and we can consider this set finite and use it to build up other terms. 03:11:47 so my question is essentially, if the combinatory logic expression has a finite evaluation, then what kind of tiling do you get from that, if you translate it using what the article does? a tiling of the torus, of the plane, of a rectangle with zero borders, or something else? 03:11:57 for any combinatory logic formula, we can generate a set of wang tiles from it such that the tiling of those Wang tiles matches up with the reduction of _that single particular combinatory logic formula_. 03:12:18 well, that's the thing. the tiling varies based on the expression you wanna evaluate. 03:12:22 and of course whether this transformation is one for which you don't need to compute the entire evaluation tree in advance. 03:12:28 because the tiles change from expression to expression. 03:12:38 (SKK)S has a different set of tiles, for example, than (SKK)K. 03:13:13 but the set of tiles is finite. and even if you include something like the Y combinator, or expressions that don't reduce to normal form, _you still have a finite set of tiles_. 03:13:48 imode: wait, so it's a "cheating" transformation that does the turing-complete step of evaluating the term in advance? 03:14:18 not exactly, no. unless you consider a C compiler reducing things down to assembly a cheating transformation. 03:14:35 that's essentially their method. 03:15:10 no reductions are being performed beforehand. 03:15:21 as in, do they basically take the infinite tileset of all possible combinators, or perhaps all possible combinators with a normal form or some such, and then if there's a finite evaluation, then obviously there's only finitely many combinators that appear in that evaluation, but you have to do the full evaluation first to determine that set. 03:15:29 ? 03:15:56 you don't have to do the full evaluation to determine the set from what I can see. 03:16:06 like, look at page 9. 03:16:17 page 9 of which pdf 03:17:00 sorry, let me relink you. 03:17:02 you mentioned at least two 03:17:07 thanks 03:17:20 http://ceur-ws.org/Vol-1032/paper-01.pdf 03:17:24 though I'll have to go to bed very soon, but we'll have the logs 03:17:24 this one. 03:17:29 gotcha. 03:17:43 so, they broke down a particular combinatory term, (((K(S(KK)(S(KK)I))a)b)c)d, into subterms. 03:17:56 much like you'd break down a particular textual term into a graph of subterms, right? 03:18:48 T6, for example, is KK, and there are two places in that formula where KK is located. 03:20:14 you can then show, by way of the schema on page 6 (which I am still studying how to use properly), that each subterm corresponds to a set of introductions, folds, unfolds, connections (which propagate terms forward) and terminals (which represent the end of a computation). 03:21:21 imode: but that's one particular term. isn't this something that should apply to any combinatory logic expression? 03:21:49 correct, but you need to generate a _new tileset_ for every expression you want to evaluate.
03:22:39 page 9 seems uninformative, most of it is a huge figure, but let me look at what comes before it too 03:22:50 the _second_ version of the paper I linked shows a _general_ CL interpreter, which employs the use of a turing machine. 03:22:58 which can work for any CL expression. 03:23:58 http://ceur-ws.org/Vol-1269/paper34.pdf the second version. 03:24:31 so they effectively describe two approaches: one in which you compile a particular CL formula down to a set of wang tiles, and then use the tiling to compute the result of the reduction of that formula... 03:25:06 and the other in which you have to start with an interpreter of CL within Wang tiles encoded as a turing machine evaluating CL terms. 03:25:39 from pages 5 and 6, I think the error in the paper (or one significant error, at least) is exactly what int-e said 03:25:54 which would be? 03:26:08 yeah and it doesn't work because the reduction isn't prepared to deal with arguments that are longer than a single character. 03:26:17 ("it" being the Turing Machine) 03:26:51 not sure where you're getting that, int-e. 03:26:55 int-e: either that, or it is prepared to deal with it by creating new tiles for each reduction of combinatory terms that ever occurs in the evaluation 03:26:57 These people don't know what they're doing at the theoretical end of their work. They *may* know something about DNA computing and Wang tiles though I'm skeptical about that. 03:27:14 y'know, why don't you try backing up your statements instead of needlessly bashing people. 03:27:25 if they're wrong, diagram it in clear wording. 03:27:38 imode: I'm looking at page 6 and there's no bracket handling in the reduction for K, I or S, except that S *inserts* brackets somewhere. 03:27:54 -!- uplime has quit (Ping timeout: 250 seconds). 03:28:15 they don't require bracket handling because they're not reducing a textual form of a CL expression. 03:28:31 imode: I'm suspicious about the DNA computing too. that's like the people who claim that soap bubbles or protein folding can efficiently solve NP problems, because *in practice* the possible results of protein folding or soap bubbles can be guessed well from a minimization problem 03:28:36 they've extracted common subterms (like you would using graph reduction). 03:28:36 um 03:28:38 int-e: ^ 03:28:50 imode: All you have to do is try running the TM on K(KI)K and see that it gets stuck in state TK. 03:29:05 int-e: can you provide an example tileset which shows that? 03:29:21 they're not encoding a TM, by the by. 03:29:31 well, in the first paper, and in the first section of the second. 03:29:35 heck, int-e is probably right, those tiles do seem to explicitly assume that the application is associative, even if each of those colors can represent complicated terms 03:30:22 imode: (\f. f f) (\f x p. p x (f f (S x))) S will require an infinite tile set. have fun doing the abstraction elimination... 03:30:23 yeah, what he said 03:30:25 b_jonas: DNA computing in this regard builds Wang tiles out of DNA molecules. 03:30:34 I'm DONE. Good night. 03:30:37 and has actually been demonstrated. not a fan of the approach. 03:30:49 int-e: goodnight. you've provided nothing. 03:31:03 o/ 03:31:25 imode: right, but the real DNA won't actually compute full non-deterministic problems, just like how the real proteins won't magically fold into the optimal energy arrangements 03:31:46 good night 03:31:49 -!- b_jonas has quit (Quit: leaving). 03:32:34 lots of hostility with a lack of evidence.
for a place where unconventional computing methods are accepted, there's quite a bit of hostility. 03:42:47 -!- Lord_of_Life has quit (Ping timeout: 240 seconds). 03:45:04 -!- Lord_of_Life has joined. 03:46:23 . o O ( must resist temptation to look at the pdf to find out if imode or int-e is right ) 03:50:55 -!- FreeFull has quit. 03:57:10 imode: just from viewing the discussion here, i suspect int-e is at the point where he considers the error so obvious that it feels like an insult to expect him to write out the counterexample in detail. 03:57:53 as in, he has a general idea why it _cannot_ work but it's a pain to write out. 03:58:08 then there's nothing more to discuss, as noted by him. 03:58:12 * imode shrugs. 03:59:09 To add insult to injury: if you restrict the redexes of SKI calculus to a finite set then you get a ground term rewrite system, for which reachability is decidable; hence it's no longer Turing complete. 04:00:12 -!- uplime has joined. 04:00:15 if you're not going to go into any further detail I'm pretty much done as well. 04:00:27 no need to watch people trash other people's work. 04:00:58 -!- danieljabailey has joined. 04:01:19 But yes, I'm totally unwilling to work out the counterexample in any detail because Wang tiles are not the right level of abstraction to work on. I've given the high-level reason further above: Since the full redex is encoded in a Wang tile, and there are reductions for which there are infinitely many distinct redexes, a finite set cannot be enough. 04:01:52 a full redex _is not_ encoded in a wang tile. complexes and subredexes are. 04:02:20 You're wrong. ("redex" refers to the subterm that is headed by the K or S being contracted) 04:02:35 "You're wrong." "That's trash." 04:02:48 I mean, are you going to go into any kind of detail whatsoever. 04:03:15 Yeah. I'm angry at this point, because you have shown no evidence of actually doing any fucking work yourself. 04:03:23 -!- int-e has left. 04:03:49 how someone can get so emotional over something so irrelevant is beyond me. hope he's alright. 04:04:21 like, if I'm wrong, perfect, I didn't write the damn paper. 04:05:08 hell I'm even trying to work out a tileset for his counterexample, which seems like a reasonable approach. 04:05:19 hope he feels better when he comes back. 04:06:40 if he's right then evaluation should get stuck when one of those newly constructed redexes (that aren't represented in the tiles) has to be reduced. 04:08:24 I agree with him. I'm trying to follow the methods they used for constructing factorial near the bottom of the paper. 04:10:18 they give a series of tuples where each tuple corresponds to a wang tile. each side of a tile corresponds to a subterm, and from what I saw, subterms can be shared. I'm not sure how K(KI)K leads to a redex that can't be represented (or hasn't been generated) as a tile. 04:10:23 or as a series of tiles. 04:11:37 the K(KI)K example was meant for the TM construction, i suspect 04:11:50 ah. I don't really care about that bit. 04:12:14 it's not really interesting. the compilation bit, now _that's_ interesting. 04:13:02 I'll have to draw out the factorial tiles tomorrow. 04:13:34 they introduce extra convenience combinators related to pairs and naturals, but give no tiles that correspond to them. 04:14:27 ahhh, nevermind, those are schemes, not actual tilesets. 04:14:46 meaning you stick whatever numbers you want in the subscripts and out pop your tiles. _that_ makes sense. 04:14:58 i think the (\f. f f) (\f x p.
p x (f f (S x))) S example was for the compilation thing, although i suspect p is unnecessary here... 04:15:16 so (\f. f f) (\f x. x (f f (S x))) S 04:15:48 is the gist just "generate an infinite series of applications like S(S(S..."? 04:16:16 or am I missing something. 04:16:44 because from what I can see, there's no issue in doing something like that. 04:16:54 well, also to do it in such a way that the arguments passed to functions have to grow. 04:17:24 there's nothing in here that says that you have to represent things like that as a single tile: in fact, that's not the case at all. 04:17:29 so that they cannot be something that is already encoded as a single tile 04:18:08 correct: the structure of the terms is incidental, based on how the terms pair. you're not shoving entire trees into single tiles, otherwise you _would_ need infinite tiles. 04:18:28 but you _can_ extract commonly found subtrees and use them to form larger applications. 04:18:56 which if I'm drawing this correctly, the addition example does. 04:19:19 imode: ok, but can it reduce something like S x y z, where x, y and z are _not_ subterms of the original expression? 04:19:45 yes, because you can construct the subterms that _aren't_ in the original expression _from_ subterms in the original expression. 04:20:03 at least from what I can see. 04:20:58 like their tiles and reductions reflect subterm sharing. 04:21:04 imode: you can construct them, but does reduction work for them? 04:21:16 yeah, looks like it. otherwise naturals would break down almost instantly. 04:21:28 hm 04:22:18 http://ceur-ws.org/Vol-1032/paper-01.pdf trying to draw out and navigate page 13 of this PDF. 04:23:22 the footnote on that page details that n and m are essentially "pre-selected numbers", so fill in the naturals and generate your tiles from this schema. 05:11:50 imode: ok i browsed through some of the pdf. assuming you're not meant to "evaluate ahead" to find tile colors, as far as i can see, there is no way to reduce S x y z unless at least _one_ of S x y, z, or S x y z is a tile color. Constructing an SKI expression that eventually reduces S x y z where none of x, y or z occur in the original expression is left as an exercise. 05:17:11 or more generally, you cannot ever reduce an application if it is not equivalent to an original tile color or the application of a pair of tile colors. 05:17:54 that sounds about right to me. 05:17:57 which means you cannot, say, construct an infinite list of numbers. 05:18:39 (say, a lazy list of factorials) 05:18:51 I'll have to try that. 05:21:12 that monoid thing also looks a bit grating - it seems to me that you could easily get into a situation where evaluation confuses a b (c d) with a (b c) d 05:21:39 I tried diving down the rabbit hole with that. 05:24:10 although i'm a little unsure whether the "sound computation grid" property is supposed to help with that 05:26:26 hm actually it _does_ seem to be intended to do that 05:27:48 essentially it enforces a kind of parenthesization by dividing up the grid into non-interacting parts. 05:29:56 (how they expect DNA to enforce that property i dunno ;P) 05:31:07 the DNA stuff is actually plausible: you can form small DNA tiles by forming two DNA strands in such a way that the base pairs encode the colors of the edges, and edges only bind to complementary base pairs. 05:31:16 that ironically is the most plausible point of this, because it's been done. 05:33:03 yes, i mean the sub-grid property, not the individual tiles 05:33:08 oh. 05:33:17 sorry, reading comprehension failed me.
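[For concreteness, a sketch of why the arguments grow in oerjan's simplified term; write F = \f x. x (f f (S x)), so F F = \x. x (F F (S x)):]

    F F S
      = S (F F (S S))
      = S ((S S) (F F (S (S S))))
      = S ((S S) ((S (S S)) (F F (S (S (S S))))))
      = ...

[The term fed back in grows without bound (S, S S, S (S S), ...), so after abstraction elimination the redexes that must be contracted have unbounded size, and no tile set fixed in advance can contain them all.]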
06:59:59 -!- oerjan has quit (Quit: Nite). 07:13:26 Now in this GURPS I can tell who is the shapeshifter because they have "X" on their forehead. Next, I have to stop them from writing "X" on everyone else's forehead too! 08:26:48 -!- tromp has joined. 10:02:03 -!- AnotherTest has joined. 10:06:34 -!- AnotherTest has quit (Ping timeout: 250 seconds). 10:16:56 after some careful consideration that paper was full of shit. 10:17:21 I tried working some of the tilings out and yeah, the people above me were right. 10:17:43 oh well! 10:33:08 -!- int-e has joined. 10:33:47 -!- tromp has quit (Remote host closed the connection). 10:33:59 -!- arseniiv has joined. 10:40:30 -!- tromp has joined. 10:58:39 imode: Three remarks about why I might care more than I maybe should. a) this is essentially in my current area of research (term rewriting, tree automata (which feature in decidability results for ground term rewrite systems; http://tata.gforge.inria.fr/ may be the best available source on that topic)) b) I hate to see people wasting their time with wrong claims c) it becomes frustrating when I fail... 10:58:45 ...to convey that information efficiently, ending up wasting even more time rather than helping. 10:58:59 -!- AnotherTest has joined. 10:59:42 int-e: it's fine, you're good! I concluded independently that it's not a good paper. 11:00:25 there's some woeful underspecification or lack of understanding on their part. 11:02:58 -!- derpy_ has quit (Ping timeout: 245 seconds). 11:03:26 -!- derpy has joined. 11:03:42 -!- AnotherTest has quit (Ping timeout: 252 seconds). 11:16:20 -!- imode has quit (Ping timeout: 250 seconds). 11:49:22 -!- arseniiv has quit (Ping timeout: 246 seconds). 12:42:39 -!- b_jonas has joined. 12:43:19 int-e: it doesn't get stuck on (KI). there are tiles to open the parenthesis. anywhere, even in a left term. they really seem to believe that the application is associative. 12:44:11 int-e: also, not only do they add a color for every combinator calculus term, but also a tile for every combinator calculus reduction. not just one-step reductions, but any reduction. so you can do any finite computation in one tile. 12:44:52 that latter part is a fixable bug, but I think it illustrates how the authors don't understand what they're talking about. 12:44:59 b_jonas: as oerjan suggested, that was for the Turing Machine in the second paper. 12:45:58 int-e: that may make a bit more sense, because a TM can actually be simulated as a 1-D CA in a more straightforward way than the, uh, monoid of combinator calculus 12:46:24 the "monoid" thing confused me because they don't explicitly state what operation it's a monoid over, but it does become clear by page 6 12:46:56 ("that" referring to the K(KI)) 12:48:32 int-e: and yes, I agree that you can't work out a proper counterexample, because the paper is vague enough, without precise definitions and proofs, that you can't point out more specifically than that where the error is 12:50:43 int-e: the part where they have single-tile reduction for every calculation is great, because if they actually implemented that, it would let them get the right result in a randomized implementation that looks for random small tilings, as opposed to a true nondeterministic implementation, so the DNA computation would run trivially with just one transformation from the input to the result and a lot of 12:50:49 unused stuff 12:52:48 wtf, now even oerjan looked at it? why? 12:52:57 (still looking at the logs) 12:53:56 `?
cdop 12:53:58 CDOP is OCPD, except with the letters in the *proper* order. 12:55:25 Put differently, he probably reached the point where he had to find out what all the ruckus was about. 12:56:29 int-e: also, I verified that for both of those disconnected polyominoes, there's essentially one tiling that is invariant to translation (0,8) and translation (8,0), and I know the tilings I had back when I first tried this were invariant to those. that implies that the tilings you show on grid6.png are the ones I found. 12:57:32 int-e: these polyominoes are significant because I think, but I'm not sure, that they're the only polyominoes of 4 squares that can't tile the plane with just translations and horizontal and vertical mirrors and 180 deg rotations, you really need a 90 deg rotation or diagonal mirror 12:57:41 there was an article on that, let me check 13:20:51 -!- arseniiv has joined. 13:58:44 -!- arseniiv has quit (Ping timeout: 246 seconds). 14:08:51 -!- arseniiv has joined. 15:06:36 -!- uplime has quit (Ping timeout: 272 seconds). 15:13:17 -!- Essadon has joined. 15:14:01 -!- Essadon has quit (Max SendQ exceeded). 15:14:27 -!- Essadon has joined. 15:15:47 -!- b_jonas has quit (Quit: leaving). 15:18:28 -!- AnotherTest has joined. 15:23:04 -!- AnotherTest has quit (Ping timeout: 268 seconds). 15:35:36 -!- AnotherTest has joined. 15:42:30 -!- Lord_of_Life_ has joined. 15:45:00 -!- Lord_of_Life has quit (Ping timeout: 250 seconds). 15:45:04 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 15:49:20 -!- AnotherTest has quit (Ping timeout: 246 seconds). 15:51:30 it was good 16:14:19 -!- AnotherTest has joined. 16:24:41 -!- AnotherTest has quit (Ping timeout: 246 seconds). 16:37:27 -!- tromp has quit (Ping timeout: 240 seconds). 17:14:18 -!- Sgeo_ has quit (Read error: Connection reset by peer). 17:14:44 -!- Sgeo_ has joined. 17:39:37 -!- AnotherTest has joined. 18:00:02 -!- S_Gautam has joined. 18:04:27 `olist 1150 18:04:28 olist 1150: shachaf oerjan Sgeo FireFly boily nortti b_jonas 18:13:11 -!- oerjan has joined. 18:33:08 -!- AnotherTest has quit (Ping timeout: 250 seconds). 18:33:43 -!- b_jonas has joined. 18:36:00 private fireworks are going strong 18:37:38 -!- tromp has joined. 18:38:13 -!- FreeFull has joined. 18:42:32 here too 18:42:59 @time 18:43:01 Local time for shachaf is Mon Dec 31 10:42:59 2018 18:44:47 hm xkcd's consensus new year was about a quarter hour ago 18:45:04 so happy new year, i guess! 18:57:33 it's not yet new year, but happy new year to you as well 18:57:34 b_jonas: that crap paper actually has a kind of solution to the monoid confusion in the "sound computation grid" section. however, the fact that every reduction result needs to be represented on a tile clearly blows it for any expression that keeps constructing new terms indefinitely, something like the lazy list of factorials i suggested. 18:57:59 oerjan: ok 18:58:27 b_jonas: see xkcd hth 19:01:01 and yeah, i had to see if the problem with the paper was as obvious as int-e implied. 19:01:08 oerjan: yes, I've seen it 19:01:13 (plus, i didn't like the animosity) 19:02:46 -!- imode has joined. 19:03:08 -!- AnotherTest has joined. 19:03:23 happy new year's eve. 19:03:32 hip hip 19:04:04 new year starts in less than four hours by the way 19:04:10 let's celebrate 19:15:40 is freenode doing that newyears thing again? 19:16:29 imode: I'm not sure what counts as a "that newyears thing" specifically.
13:20:51 -!- arseniiv has joined.
13:58:44 -!- arseniiv has quit (Ping timeout: 246 seconds).
14:08:51 -!- arseniiv has joined.
15:06:36 -!- uplime has quit (Ping timeout: 272 seconds).
15:13:17 -!- Essadon has joined.
15:14:01 -!- Essadon has quit (Max SendQ exceeded).
15:14:27 -!- Essadon has joined.
15:15:47 -!- b_jonas has quit (Quit: leaving).
15:18:28 -!- AnotherTest has joined.
15:23:04 -!- AnotherTest has quit (Ping timeout: 268 seconds).
15:35:36 -!- AnotherTest has joined.
15:42:30 -!- Lord_of_Life_ has joined.
15:45:00 -!- Lord_of_Life has quit (Ping timeout: 250 seconds).
15:45:04 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
15:49:20 -!- AnotherTest has quit (Ping timeout: 246 seconds).
15:51:30 it was good
16:14:19 -!- AnotherTest has joined.
16:24:41 -!- AnotherTest has quit (Ping timeout: 246 seconds).
16:37:27 -!- tromp has quit (Ping timeout: 240 seconds).
17:14:18 -!- Sgeo_ has quit (Read error: Connection reset by peer).
17:14:44 -!- Sgeo_ has joined.
17:39:37 -!- AnotherTest has joined.
18:00:02 -!- S_Gautam has joined.
18:04:27 `olist 1150
18:04:28 olist 1150: shachaf oerjan Sgeo FireFly boily nortti b_jonas
18:13:11 -!- oerjan has joined.
18:33:08 -!- AnotherTest has quit (Ping timeout: 250 seconds).
18:33:43 -!- b_jonas has joined.
18:36:00 private fireworks are going strong
18:37:38 -!- tromp has joined.
18:38:13 -!- FreeFull has joined.
18:42:32 here too
18:42:59 @time
18:43:01 Local time for shachaf is Mon Dec 31 10:42:59 2018
18:44:47 hm xkcd's consensus new year was about a quarter hour ago
18:45:04 so happy new year, i guess!
18:57:33 it's not yet new year, but happy new year to you as well
18:57:34 b_jonas: that crap paper actually has a kind of solution to the monoid confusion in the "sound computation grid" section. however, the fact that every reduction result needs to be represented on a tile clearly blows it for any expression that constructs infinitely many results, something like the lazy list of factorials i suggested.
18:57:59 oerjan: ok
18:58:27 b_jonas: see xkcd hth
19:01:01 and yeah, i had to see if the problem with the paper was as obvious as int-e implied.
19:01:08 oerjan: yes, I've seen it
19:01:13 (plus, i didn't like the animosity)
19:02:46 -!- imode has joined.
19:03:08 -!- AnotherTest has joined.
19:03:23 happy new year's eve.
19:03:32 hip hip
19:04:04 new year starts in less than four hours by the way
19:04:10 let's celebrate
19:15:40 is freenode doing that newyears thing again?
19:16:29 imode: I'm not sure what counts as "that newyears thing" specifically. there's a #freenode-newyears linked from the topic of #freenode, but then, irc channels are cheap once you have a network up
19:16:43 imode: do you mean sending wallops?
19:16:49 or global notices or whatever?
19:17:12 the channel was what I was referring to.
19:20:56 -!- arseniiv_ has joined.
19:24:13 -!- arseniiv has quit (Ping timeout: 245 seconds).
19:29:55 what?
19:30:03 I'm confused now
19:30:22 it's a common human state hth
19:30:50 in NES Super Mario Bros, how many fireballs does it take to kill Bowser (without jumping on him)? I thought it took six, but https://www.youtube.com/watch?v=xJcAPGf0Z6A says five.
19:32:42 I may be mixing it up with GB Super Mario Land; in that one it takes six fireballs to kill most of the bosses (the castle bosses are immune, and the witch is possible to kill but very hard because she despawns immediately after a hit)
19:34:47 -!- AnotherTest has quit (Ping timeout: 240 seconds).
20:10:52 -!- uplime has joined.
20:14:06 -!- AnotherTest has joined.
20:14:25 in girl genius, maxim's nose seems to be growing lately.
20:22:24 Hmm private fireworks are illegal in the city this year... still hearing stuff though. People don't know or don't care.
20:22:42 -!- AnotherTest has quit (Ping timeout: 246 seconds).
20:23:20 int-e: they are legal here with strong restrictions, but most of the ways people use them are illegal and sometimes dangerous
20:23:23 so be careful
20:23:50 But I guess it's quieter than last year.
20:26:43 i think they're allowed in the suburbs here but not the city center
20:30:30 it's per municipality, and I guess it's allowed outside of cities where it's not a fire hazard.
20:33:16 forbidden inside the red stippled line, allowed 18:00 - 2:00 outside https://www.tbrt.no/images/artikkelbilder/ForbudssoneforfyrverkeriiTrondheim.jpg
20:34:06 * oerjan isn't even on that map, but a bit further east
20:35:12 (there are municipal fireworks inside the zone though)
20:36:53 b_jonas: hah. "experts warn of illegal firecrackers"
20:37:46 I'm used to that warning. It hasn't changed in the past 30 years.
20:38:45 (and personally I don't even like the legal ones)
20:39:48 <oerjan> hm xkcd's consensus new year was about a quarter hour ago => wow, pretty close to mine! (UTC+5)
20:41:26 -!- arseniiv_ has changed nick to arseniiv.
20:41:28 hmm, average time zones by person? so india and china have a huge influence?
20:41:40 int-e: median
20:41:46 (I had not seen the xkcd yet)
20:42:10 right, median makes more sense actually
20:46:29 -!- tromp has quit (Remote host closed the connection).
20:49:17 ah, not that close, it’s 6:30 UTC, not what I thought, so not that significant. Or on the contrary, more significant, as it’s further from the median and so rarer(?)
20:50:53 average would be a bad idea, it may be skewed by the few people who live in mediaeval theme parks that are in timezones several centuries before most of the earth
20:51:03 ok, maybe not that skewed, but still
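As a rough illustration of the average-versus-median point: the populations and offsets below are made-up placeholder numbers, not data from this discussion. The mean offset gets pulled around by any large or extreme contributor, while the median only has to be the offset at which the cumulative population passes the halfway mark.

```python
# Sketch: median vs mean inhabited timezone (placeholder numbers only).

populations = {  # UTC offset in hours -> population in millions (made up)
    -8: 60, -5: 180, 0: 130, 1: 340, 3: 250, 5.5: 1400, 8: 1600, 9: 130,
}

total = sum(populations.values())
running = 0
for offset in sorted(populations):
    running += populations[offset]
    if 2 * running >= total:  # first offset where the cumulative share reaches half
        print("median offset: UTC%+g" % offset)
        break

mean = sum(off * pop for off, pop in populations.items()) / total
print("mean offset: UTC%+.2f" % mean)
```

With these placeholder numbers the median lands inside the India/China population mass while the mean is a fractional offset nobody lives at, which is why the median reads as the more natural "consensus" choice.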
20:55:04 The red line doesn't seem to enclose anything.
21:02:25 -!- 77FABDJ8V has joined.
21:09:40 zzo38: the fjord is not very flammable and so considered obviously not included hth
21:10:14 i suppose you could ask how far from shore a boat must be
21:15:03 Yes, so they should make the line enclosed so that you can know.
21:15:29 -!- 77FABDJ8V has quit (Remote host closed the connection).
21:34:12 "You must not set off fireworks between 11pm and 7am, except for: -- New Year's Eve -- when the cut off is 1am"
21:35:16 There were a few pops when there was still daylight, which is a bit odd.
21:37:01 fizzie: sure. a lot of the illegal explosives go for the sound and blowing off stupid people's hands, not for the sight.
21:37:26 it's traditional for a few people to blow their own hands off every New Year
21:38:41 people are discouraged from that, both because the kinds of explosives used for that are illegal and if you blow your hand off you're almost certainly discovered, and because spending New Year in the hospital is unpleasant, but people still do it
21:39:15 There are far more fireworks here on Bonfire Night (aka Guy Fawkes Night) than on New Year's.
21:47:01 Anyone in the +0200 timezone (Finland or Romania)? Should we do a small countdown then?
21:50:25 -!- tromp has joined.
21:51:39 Because if not, then I'll reserve the celebration for the +0100 and leave some for the +0000, -0500, -0800 ones.
21:51:56 Anyway, I'll prepare the New Year sausages.
21:54:52 -!- tromp has quit (Ping timeout: 246 seconds).
22:00:11 For anyone with a +0100 timezone offset, happy New Year!
22:02:44 . o O ( i think you're off by one again )
22:04:45 yeah darn +0200
22:07:22 . o O ( CET+1 )
22:09:42 -!- S_Gautam has quit (Quit: Connection closed for inactivity).
22:13:42 -!- uplime has quit (Ping timeout: 250 seconds).
22:18:48 -!- tromp has joined.
22:57:47 -!- tromp has quit (Remote host closed the connection).
22:59:37 And now the New Year is coming up in our timezone offset, the actual +0100 (France, Germany, Poland, Norway, Hungary)
22:59:43 Just a few minutes.
22:59:50 And now the New Year is coming up in our timezone offset very soon, the actual +0100 (France, Germany, Poland, Norway, Hungary)
23:00:08 BOOM.
23:01:43 happy new year!
23:02:17 * int-e listens to money going up in smoke.
23:04:08 Happy New Year!
23:04:39 oh sirens
23:10:32 -!- sprocklem has joined.
23:17:04 -!- uplime has joined.
23:30:21 -!- tromp has joined.
23:34:37 -!- tromp has quit (Ping timeout: 246 seconds).
23:51:33 In the +0000 timezone offset (UK), New Year will start in about 9 minutes.
23:52:09 Not much action on the fireworks front.
23:56:09 There was more half an hour ago. Maybe everyone's watching the official ones.
23:56:23 (Sadly our windows face the wrong direction.)
23:58:31 less than 90 seconds
23:58:49 fizzie: use the power of the internet to watch the official ones!
23:59:23 -!- tromp has joined.