00:01:07 <b_jonas> yes, cache is one of the reasons why you want to make context switches rare in the first place, not more frequent, even if they're within one user space. but that doesn't answer your question I think.
00:01:45 <b_jonas> I don't see how any stack arrangement really fixes that.
00:02:20 <b_jonas> Obviously you can make it worse by a really bad scheme, but still.
00:03:46 <shachaf> Well, imagine you had no compiler support.
00:04:11 <shachaf> The straightforward thing you'd do is define a struct for all the asynchronous state you need, including an "instruction pointer".
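A minimal sketch in C of the kind of struct shachaf describes: all the state one asynchronous operation carries across suspensions, plus an explicit "instruction pointer" field recording where to resume. Every name and field here is invented for illustration.

    #include <stddef.h>

    /* One in-flight asynchronous operation, hand-rolled. */
    struct async_op {
        enum { OP_START, OP_WAIT_READ, OP_WAIT_WRITE, OP_DONE } ip;
        int fd;            /* descriptor the operation is waiting on */
        char buf[4096];    /* data carried across suspension points */
        size_t len;        /* how much of buf is filled */
    };

The event loop resumes the operation by switching on ip, which is the hand-written version of the async-function lowering discussed later in this conversation.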
00:04:39 <b_jonas> We already have some amount of compiler support.
00:05:33 <b_jonas> And cpu arch support too, for that matter. It took me some time to figure that out.
00:07:35 <b_jonas> shachaf: you know when x86_64 added the AVX instructions, thus extending 128-bit XMM registers to 256-bit YMM registers, right? The existing XMM registers are aliased to the bottom halves of the YMM registers.
00:07:45 <int-e> wait, which do you want... coroutines or userspace threads?
00:08:09 <shachaf> They seem like the same kind of thing?
00:08:18 <b_jonas> how exactly do you define them
00:08:39 <b_jonas> also the real question is, what do you want, since you asked the question in the first place
00:08:59 <shachaf> I want a convenient and efficient way to write code that does asynchronous operations.
00:09:36 <b_jonas> shachaf: well, that's a somewhat different application then. so coroutines or user-space threads or whatever, but applied to asynchronous IO in particular?
00:10:15 <int-e> Coroutines are invoked explicitly (for example, when you hand control off to a generator), which is amenable to static analysis (though obviously only to the extent that function pointers are amenable to static analysis).
00:10:41 <shachaf> (Cooperative) userspace threads are coroutines that are invoked by some scheduler.
00:11:47 <b_jonas> So anyway, when x86_64 did that, they made it so that all existing instructions that modified an XMM register never changed the upper halves of the YMM register. This has the disadvantage that whenever you have libraries compiled to SSE2 and libraries compiled to AVX in the same userspace, and you call between them, then either you have to zero all the upper halves, or the cpu has to deal with stashing
00:11:54 <b_jonas> away the upper halves when it executes YMM instructions,
00:12:38 <int-e> So coroutines often are coordinated in a predictable way. Threads usually aren't coordinated; they are scheduled but do unrelated work.
00:12:42 <shachaf> Let's imagine I don't care about XMM
00:12:45 -!- nchambers has quit (Quit: WeeChat 2.2).
00:13:06 <b_jonas> or otherwise most SSE2 instructions would suddenly get an extra input, because they would have to merge the old upper half of the output YMM register into the new value. The cpu allows both of these: the code explicitly zeroing the YMM registers or the cpu stashing the top halves.
00:13:12 <b_jonas> But nevertheless, this seems inefficient.
00:13:34 <b_jonas> shachaf: you don't care about XMM? but you want efficient code with compiler support? do you want to use a new cpu architecture too?
00:13:40 <int-e> Now sure, you can build a scheduler on top of a coroutine mechanism, but any static analysis for coroutines is bound to fail.
00:14:03 <b_jonas> So for a while I wondered why x86_64 was designed to work that way, instead of all SSE2 instructions just zeroing the top halves of any YMM register that they store into.
00:14:06 <shachaf> int-e: OK, then I'll say that I care about cooperative userspace threads.
00:14:30 <int-e> Then I'll say that you're in for a lot of trouble. :P
00:15:03 <b_jonas> But it turns out that that would have been bad, because you couldn't do user-space coroutine switching correctly, which would be a worse cost than the one we have to pay now.
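For concreteness, a hedged sketch of the transition cost described above: AVX code can execute vzeroupper (the _mm256_zeroupper intrinsic) before calling into legacy-SSE code, so the CPU never has to stash dirty upper halves. The function and callback names are invented.

    #include <immintrin.h>

    /* Do some AVX work, then call into code compiled for plain SSE2. */
    void avx_then_sse(float *dst, const float *a, const float *b,
                      void (*sse2_compiled_callback)(void)) {
        __m256 v = _mm256_add_ps(_mm256_loadu_ps(a), _mm256_loadu_ps(b));
        _mm256_storeu_ps(dst, v);
        _mm256_zeroupper();          /* zero all YMM upper halves: cheap */
        sse2_compiled_callback();    /* legacy SSE code now runs without
                                        the SSE/AVX transition penalty */
    }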
00:15:38 <shachaf> amd64 registers are so complicated
00:16:02 <b_jonas> shachaf: yes they are. but if you are asking for efficient code with compiler support, you have to deal with that.
00:16:33 <shachaf> For example storing into the lower 32 bits zeroes the upper 32 bits, but storing into the lower 16 bits doesn't zero the upper 16 bits
00:16:58 <b_jonas> shachaf: yes, and that is for exactly the same reason as with the XMM registers
00:17:30 <shachaf> Anyway, what about the specific case of userspace threads ignoring SIMD registers for the moment?
00:17:44 <b_jonas> back then in the 386, the extra input dependency didn't matter, so that was a sane design choice
00:17:56 <b_jonas> today it's much more strange, and has a much more obscure reason
00:18:06 <shachaf> There's an obvious thing you can do, which is just allocate a stack for each thread and switch out rsp and rip (and the callee-save registers) on context switch.
00:18:11 <b_jonas> shachaf: again, then you're losing more than you gain with userspace threads
00:18:36 <b_jonas> um yes, except you have to save much more than that, specifically all the callee-saved registers
00:19:11 <shachaf> The callee-saved registers are not much more than the callee-save registers.
00:20:26 <b_jonas> I mean, you have to save more than just rsp and rip, because there are more callee-saved registers than that
00:20:54 <shachaf> There are six others in the System V amd64 ABI
00:21:10 <shachaf> Saving a few registers is not really the big bottleneck here.
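A sketch of the minimal switch shachaf describes, for the System V amd64 ABI: push the six callee-saved registers, swap rsp, and let ret restore rip from the new stack. This is an illustrative hand-rolled version, assuming a new thread's stack has been pre-seeded with six dummy register slots and an entry address; it deliberately ignores the floating point control and signal mask issues raised next.

    /* void ctx_switch(void **old_sp, void *new_sp);
       rip is saved implicitly: the call pushed our return address. */
    __asm__(
        ".globl ctx_switch\n"
        "ctx_switch:\n"            /* rdi = &old_sp, rsi = new_sp */
        "  pushq %rbp\n"
        "  pushq %rbx\n"
        "  pushq %r12\n"
        "  pushq %r13\n"
        "  pushq %r14\n"
        "  pushq %r15\n"
        "  movq %rsp, (%rdi)\n"    /* publish the old stack pointer */
        "  movq %rsi, %rsp\n"      /* switch stacks */
        "  popq %r15\n"
        "  popq %r14\n"
        "  popq %r13\n"
        "  popq %r12\n"
        "  popq %rbx\n"
        "  popq %rbp\n"
        "  ret\n"                  /* pops the new rip */
    );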
00:21:58 <b_jonas> the sad part is that you usually have to save both floating point control registers (the x87 one and the SSE2 one), AND the signal mask (not technically a register, but you do have to save it), despite that most of the time they aren't actually changed in the program
00:22:14 <b_jonas> but it's hard to make sure that no library linked into your program ever changes them, so you have to save them
00:22:25 <shachaf> You can just make sure that no library ever changes them.
00:22:55 <shachaf> Changing the signal mask is ridiculous, you have to do a system call per context switch.
00:23:52 <b_jonas> shachaf: yes, that's sad, but you have to do it. the good news is that system calls like that are very fast.
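The POSIX ucontext API is the "save everything, signal mask included" flavour being discussed here: on Linux, glibc's swapcontext issues an rt_sigprocmask system call on every switch, which you can confirm by running strace on a sketch like this one.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, co_ctx;
    static char co_stack[64 * 1024];

    static void co_body(void) {
        puts("in coroutine");
        swapcontext(&co_ctx, &main_ctx);   /* yield back to main */
        puts("coroutine resumed");
    }

    int main(void) {
        getcontext(&co_ctx);
        co_ctx.uc_stack.ss_sp = co_stack;
        co_ctx.uc_stack.ss_size = sizeof co_stack;
        co_ctx.uc_link = &main_ctx;        /* continue here when co_body returns */
        makecontext(&co_ctx, co_body, 0);
        swapcontext(&main_ctx, &co_ctx);   /* run until the yield */
        swapcontext(&main_ctx, &co_ctx);   /* resume after the yield */
        return 0;
    }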
00:24:03 <b_jonas> sometimes they can even execute purely in userspace.
00:24:14 <b_jonas> without switching to kernel mode that is. by magic.
00:24:45 <b_jonas> I don't know if the signal mask thing can do that, but the point is, it doesn't have to do IO or anything, it just has to change some flags and check some other flags for pending signals.
00:24:49 <shachaf> No, you don't have to do it, because you're writing a program and you know what your program is doing.
00:25:07 <int-e> Ah words, my eternal nemeses. I could've saved a lot of typing by just writing that coroutines are much more tightly coupled than threads (userspace or otherwise).
00:25:35 <b_jonas> shachaf: yes, see above what I said about how recompiling all the libraries you link in, including all the nasty parts of glibc, does help, but in practice it's harder than what you might expect by "compiler support"
00:26:23 <b_jonas> and the signal mask is the one you can most easily get away without saving, because the program has more control over it, but there's no way you can optimize away the two floating point control registers
00:26:35 <b_jonas> Let me check the source code to see what else I forgot that has to be saved
00:26:36 <shachaf> Maybe your program doesn't do floating point operations.
00:27:00 <b_jonas> shachaf: maybe, but it's very unlikely that even all the libraries including libc never do any
00:27:09 <b_jonas> it's pretty unlikely in fact
00:27:34 <shachaf> I don't think saving registers is the big bottleneck here anyway.
00:28:38 <b_jonas> wait, there's some additional crazy stuff here I think
00:28:42 <b_jonas> something that I forgot about
00:28:43 <shachaf> I think a much bigger problem with the system I described is that every time you do a stack switch, the entire stack is out of the cache.
00:29:28 <b_jonas> shachaf: I don't see why it would be "the entire stack", and I don't see why it's the stack in particular that you care about
00:29:57 <b_jonas> whenever you use more data than fits in your cache, most of the old data will be out of the cache
00:30:08 <HackEso> apt:APT is a technical term in cyber witchcraft, short for "adequate pernicious toe-rags". \ peace witch:Peace witches do alchemy: they turn mundane building material to gold. They're in the same universe where Bowser turned peaceful citizens of the Mushroom Kingdom to building material.
00:30:09 <b_jonas> it doesn't matter much if it's stack or not
00:30:40 <b_jonas> if you access very little data while a coroutine executes, then the other data isn't out of the cache. if you use a lot, then it is out of the cache.
00:31:03 <b_jonas> sure, there are some fine details going on there because caches are complicated
00:31:12 <b_jonas> but why is the stack in particular important?
00:31:35 <shachaf> Because you could do a stack switch and then call a function that uses a bunch of stack space but doesn't do any context switch.
00:31:46 <shachaf> And all that memory is not in the cache.
00:31:56 <zzo38> I have thought to instead design the instruction set without automatic cache control and instead you must specify your own controls
00:34:20 <int-e> . o O ( there's an easy way out... just don't do a context switch. )
00:34:24 <b_jonas> zzo38: you know that already gets very hairy, but if you also do user-space coroutine switches, then it gets really horrible
00:34:30 <b_jonas> int-e: yeah, that's what I said above.
00:34:55 <int-e> Or do the transputer thing (3 registers, was it? sweet for context switching!)
00:35:26 <shachaf> int-e: OK, what's your alternative?
00:35:40 <shachaf> I'm not really tied to threads here, I'd like any sort of way to express these asynchronous things nicely.
00:37:49 <int-e> I guess I'm not sure why you want to switch away out of the middle of a heavy computation in the first place; that sounds like a job for a proper OS thread to me.
00:38:24 <shachaf> A computation doesn't have to be very heavy to suffer from cache misses.
00:38:37 <shachaf> I'd like to measure this, admittedly.
00:38:51 <int-e> I imagine you also have something event-driven where nothing ever takes long, which can all be handled by a single event-driven thread.
00:39:29 <shachaf> Sure, and then you'd make structs that correspond more or less to the contents of the stack for the threads, right?
00:39:54 <int-e> If they carry state... yes.
00:40:01 <b_jonas> sorry, that was unrelated to the coroutine thing
00:40:37 <shachaf> Sure, they carry state. They do a moderately complicated thing.
00:41:03 <b_jonas> shachaf: and sometimes the compiler helps you with maintaining those structs implicitly
00:41:48 <b_jonas> in the first stage by supporting closures;
00:42:41 <b_jonas> and in the second stage by allowing the programmer to mark functions as "async"; those functions are compiled so that they, or other async functions they call, can have yield points where they save their state and IP into a struct and pick up from there when you continue them.
00:42:48 <b_jonas> compilers can actually do both these days.
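A hand-compiled illustration of that lowering, in C: each yield point becomes a case label, and the live locals are hoisted into the state struct. This is the same switch-based trick protothreads use; all names are invented, and the state must start zeroed.

    struct count_state { int ip; int i; };

    /* Yields 0..n-1, one value per call; returns 0 when finished. */
    int count_to_n(struct count_state *st, int n, int *out) {
        switch (st->ip) {
        case 0:
            for (st->i = 0; st->i < n; st->i++) {
                *out = st->i;
                st->ip = 1;
                return 1;      /* yield: state and "IP" live in *st */
        case 1:;               /* execution resumes here on the next call */
            }
        }
        st->ip = 2;
        return 0;
    }

A caller just loops on count_to_n until it returns 0. Nothing is saved on the machine stack across yields, which is exactly why a plain, non-async function in the middle of the call chain can't contain a yield point.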
00:43:04 <shachaf> Sure, that's one of the schemes I'm describing.
00:43:24 <b_jonas> the drawback is that you can't do a stackless yield through a function that isn't compiled async, even when it calls your function back,
00:43:41 <b_jonas> but compiling every function that has callbacks async can cost a lot.
00:43:49 <imode> int-e: I wonder, because you have partially applied arguments, if it's possible to refactor that queue automaton you had such that it doesn't do subterm copying. your state tables would balloon but I don't think it'd be _that_ much of an increase.
00:44:13 <b_jonas> so user-space context switches are easier if you have unpredictable yields from deep inside functions, but generally you want to avoid that.
00:44:55 <shachaf> I think requiring the user to mark async functions is reasonable.
00:45:00 <b_jonas> shachaf: yes, that's what you said, I just wanted to explicitly tell how it works
00:45:24 <int-e> imode: I don't know what you mean by "partially applied arguments" but any kind of graph reduction requires a representation of references and I'm not going there on this level of non-abstraction.
00:45:27 <b_jonas> shachaf: yes, but the user also needs to learn how to write code efficiently. that is, of course, ALWAYS the case when you want efficient code, because compilers can't just do magic.
00:45:47 <shachaf> Yes. The question is how you can help a user who knows what they're doing, of course.
00:46:23 <b_jonas> both that, and what is it that the user should do exactly
00:46:29 <imode> int-e: `sk results in a unique object that represents the partial application of S to K. further applications produce new, unique objects. I'm asking if it's possible to strip away the states you use to copy subterms around.
00:46:44 <int-e> imode: (the point of graph reduction is that one can share subterms which is what you seem to be suggesting)
00:46:56 <imode> not interested in graphs.
00:47:06 <int-e> imode: in my automaton `sk is just that.
00:47:42 <int-e> imode: It's an application of s to k. There is nothing "partial" about it, it's just not a redex.
00:47:43 <imode> let me rephrase: I'm wondering if it's possible to eliminate #, %, & and @.
00:48:52 <imode> it is a partial application... S takes 3 arguments, you've converted it to something that takes 1 and applied it to a single argument.
00:49:01 <imode> rather, 1 at a time.
00:49:28 <int-e> imode: Yeah, no, that's not the way I see it. ` takes 2 arguments; S and K take none.
00:50:53 <int-e> imode: I'm viewing this as first-order term rewriting. Terms are built from s, k and `; no partial applications. There are rules ``k<X><Y> -> <X> and ```s<X><Y><Z> -> ``<X><Z>`<Y><Z>. I implement leftmost outermost reduction for these two rules.
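For concreteness, a small sketch of those two rules as leftmost-outermost reduction over an explicit tree, in C. This works at the term level rather than on int-e's string representation, and it shares subterms (so it is graph-like in spirit); all names are invented.

    #include <stdlib.h>

    typedef struct Term { char tag;   /* 's', 'k' or '`' */
                          struct Term *l, *r; } Term;

    static Term *mk(char tag, Term *l, Term *r) {
        Term *t = malloc(sizeof *t);
        t->tag = tag; t->l = l; t->r = r;
        return t;
    }
    static Term *ap(Term *f, Term *x) { return mk('`', f, x); }

    /* One leftmost-outermost step; NULL means t is a normal form. */
    static Term *step(Term *t) {
        if (t->tag != '`') return NULL;
        Term *f = t->l;
        /* ``k<X><Y> -> <X> */
        if (f->tag == '`' && f->l->tag == 'k')
            return f->r;
        /* ```s<X><Y><Z> -> ``<X><Z>`<Y><Z>, sharing Z */
        if (f->tag == '`' && f->l->tag == '`' && f->l->l->tag == 's')
            return ap(ap(f->l->r, t->r), ap(f->r, t->r));
        Term *l2 = step(t->l);           /* else reduce inside, left first */
        if (l2) return ap(l2, t->r);
        Term *r2 = step(t->r);
        return r2 ? ap(t->l, r2) : NULL;
    }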
00:51:01 <b_jonas> imode: you know that you almost certainly need at least one data structure that has two children. in this case it's the S2 closure. you can choose different primitives, but you'd have to get REALLY far from combinatory calculus before you get something with only list-like structures, no tree-like ones.
00:51:37 <imode> int-e: right, and in your execution, you're skipping/moving over subterms. I'm asking you if it's possible to remove those.
00:51:47 <imode> your state table would grow.
00:53:16 <int-e> imode: I don't see how to do that with finitely many states. And copying is unavoidable if I want to implement ```s<X><Y><Z> -> ``<X><Z>`<Y><Z> without changing the term representation.
00:54:30 <imode> something tells me it's possible to do that with finitely many states, but _not_ generally: you'd need to generate the state tables for any particular term tree.
00:58:19 <imode> https://ptpb.pw/-OCf/text something like the following. I'm curious if it's possible to avoid the explosion.
00:59:12 <imode> whup, one of those rules is wrong.
00:59:44 <int-e> imode: the thing is, to skip over a subterm without using auxiliary symbols, you have to keep track of how many ` you have seen (since when you have to skip n subterms, after seeing a ` you have to skip n+1 subterms), and there is no bound on that number, even for a fixed starting term.
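int-e's counting argument, as a few lines of C over the prefix string (` marks an application, s and k are atoms); the function name is invented. The counter has no bound, which is exactly what a finite-state head cannot track without auxiliary markers.

    /* Return a pointer just past one complete subterm at p. */
    const char *skip_subterm(const char *p) {
        unsigned long need = 1;          /* subterms still to skip */
        while (need > 0) {
            if (*p == '`') need += 1;    /* saw a `: now skip n+1 subterms */
            else           need -= 1;    /* atom 's' or 'k' consumed */
            p++;
        }
        return p;
    }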
01:00:50 <b_jonas> int-e: yes, there's no shortcut because you can't avoid a tree-like structure
01:00:55 <imode> int-e: correct. what I'm proposing isn't using a head-based evaluation method but running over the string using basic substitution rules.
01:00:57 <b_jonas> you can't make it just lists nested to a limited depth
01:01:05 <int-e> imode: oh now you have many more markers: (, S, K, )
01:01:27 <imode> yeah but (SK) and such can be converted into single symbol markers.
01:02:11 <imode> just went for a walk earlier and thought of a possible reduction from SK to Thue.
01:02:13 <int-e> imode: so now *you* have partial applications which I have avoided.
01:03:12 <imode> my thought is that there's a finite set of objects and a finite set of application rules that leads to an evaluator for combinatory logic.
01:03:41 <rain1> I would like to understand the problem you are working on better
01:03:58 <rain1> it is related to SK combinators implemented with string rewriting
01:04:17 <rain1> and you use auxiliary characters to represent the tape head, guide the evaluation and want to reduce them
01:04:22 <int-e> imode: So... hmm. No, that won't end up being finite. I can produce a starting term that will evolve all of S, (SS), (SSS), (SSSS), ... for example. (It's not even hard if you know how to do abstraction elimination, since you can just iterate \x -> (x S) starting with S.)
01:05:13 <imode> I realize that. there's an encoding of SK in terms of Wang tiles that I've been trying to study/mull over.
01:05:13 <int-e> imode: It's also completely contrary to what I intended to do... namely, find a sweet tradeoff between the number of states required, simplicity, and number of extra symbols.
01:05:24 <rain1> you pasted code before with the # % stuff, can we see it again
01:05:24 <imode> think I linked it yesterday.
01:05:36 <b_jonas> imode: do BCKW instead of SK! :-)
01:05:45 <b_jonas> it doesn't really help, but still
01:06:10 <imode> http://ceur-ws.org/Vol-1032/paper-01.pdf
01:06:17 <b_jonas> see https://esolangs.org/wiki/Combinatory_logic
01:06:28 <int-e> rain1: if you're wondering about context, the #, %, &, @ refer to states in http://paste.debian.net/1057921/
01:06:42 <rain1> ah queue automaton
01:07:35 <rain1> ok how about this idea: find a much simpler queue automaton which is also Turing complete and which S,K can be macro-expressed into in a simple way
01:07:49 <rain1> perhaps that could reduce the rewrite system a bit
01:08:01 <imode> int-e: they give a sample reduction on page 9.
01:08:34 <int-e> rain1: it would be trivial to implement any particular bitwise cyclic tag program.
01:08:50 <b_jonas> rain1: yes, you do that by making an interpreter for one of those low-powered languages that ais523 likes
01:08:52 <imode> which involves a finite amount of objects with a finite amount of rewrite rules. it's specific, however, to that particular subterm, I think.
01:09:05 <b_jonas> such as counter machine stuff
01:09:05 <rain1> yeah but it would be difficult to express SK into cyclic tag wouldn't it?
01:09:18 <rain1> could we find something a bit more powerful than cyclic tag that's still very minimal
01:09:30 <rain1> anyway just one idea
01:09:38 <b_jonas> you can't just get around that
01:09:43 <int-e> rain1: (meaning BCT is trivial to compile to a queue automaton; in fact it would be fair to say that it *is* a queue automaton)
01:10:28 <imode> yeah. they encode subterms as specific objects, T0 through T7.
01:10:44 <imode> so you can generate a set of tilings for _that particular_ string of applications.
01:11:08 <b_jonas> or perhaps one of those 1-dimensional cellular automata thingies that ais523 used to consider earlier could be better than a counter-machine based approach
01:11:20 <b_jonas> you know, the really small universal 1-dimensional CAs
01:11:36 <rain1> I suppose the fundamental difficulty arises from the fact you are processing a linear string, but it represents a tree
01:11:53 <b_jonas> rain1: yes, and it's a provable performance hit too
01:11:54 <rain1> so tree parsing is mixed up with evaluation - that is what you were discussing before I interrupted
01:12:03 <imode> b_jonas: you can generate a CA to evaluate a particular term.
01:12:05 <rain1> I think I am up to speed now
01:12:28 <b_jonas> and if you simulate the CA, you can also easily add various extension rules
01:12:37 <b_jonas> as in, periodic background patterns
01:12:50 <b_jonas> which sort of count as cheats for pure CAs, but not here
01:13:09 <imode> so, what's interesting about that is that you can generate a CA from a CL term that's an interpreter of CL terms.
01:13:36 <rain1> oh that is interesting but I imagine the result will be a large CA
01:14:01 <rain1> and it will operate on an SK representation of SK combinators, which introduces overhead
01:14:01 <imode> that honestly depends on your combinator basis.
01:14:04 <b_jonas> imode: no. I don't think that's true. the interesting thing about it is that there are very small universal CAs, so you can emulate them easily in a cyclic queue machine.
01:14:12 <rain1> we can switch to the X single combinator basis
01:14:20 <imode> b_jonas: what's not true?
01:14:30 <b_jonas> imode: it's not true that you can easily generate a CA from a CL term
01:15:02 <imode> did you read the PDF I linked.
01:15:05 <imode> they give a schema for it.
01:15:18 <imode> http://ceur-ws.org/Vol-1032/paper-01.pdf page 6 through 9.
01:15:37 <rain1> isn't that wang tiles rather than a CA
01:15:40 <imode> gives the general method for generating a tileset from a particular term tree within a combinator basis.
01:15:54 <shachaf> b_jonas: One of the things that a compiler can do that doesn't happen in a struct is liveness analysis.
01:16:02 <imode> rain1: a CA is just a wang tileset that discards its history.
01:16:16 <shachaf> If you only need some subset of the fields for each state, a compiler can figure that out automatically.
01:16:17 <imode> a wang tiling can be seen as a representation of the history of a CA.
01:16:37 <imode> it's trivial to map a 2D wang tiling to a 1D CA, and they give an encoding of CL as a 2D wang tiling.
01:17:41 <b_jonas> shachaf: I don't really understand that. if the compiler has the support for compiling async (yieldable) functions in a way that they suspend to a struct, then it can also do liveness analysis on it
01:18:04 <shachaf> Yes, sure, I mean compared to a manual struct.
01:18:05 <b_jonas> is there some difference between that case and the case when you do user-space context switches?
01:18:11 <rain1> i think that is a limited view of wang tiles
01:18:15 <rain1> but I get what you mean
01:18:26 <shachaf> I mean this is another reason you might want to have compiler support.
01:18:35 <imode> a 3D wang tiling generates a 2D CA, for example.
01:18:52 <imode> there are also mappings between 2D wang tilings and 2D CAs.
01:19:13 <b_jonas> shachaf: not really. a programmer can also figure out which variables they changed, and store only those, or load only the ones they use
01:19:15 <imode> you just have a large number of states.
01:19:38 <shachaf> Sure, but then it's not really a struct anymore.
01:19:50 <rain1> don't you agree there are wang tile sets that don't relate to CA?
01:20:01 <b_jonas> shachaf: heck, even the compiler can figure out most of that from just the individual, what's it called, basic block? whatever you call a section of the code with only one entry point that ends at yields
01:20:15 <imode> rain1: can you show me an example of one?
01:21:10 <int-e> imode: wtf is that paper...
01:21:11 <b_jonas> shachaf: um, I mean, if you always reuse the same struct for the storage of the same coroutine, mutating it in place
01:21:20 <imode> int-e: a wild ride.
01:21:27 <b_jonas> which of course works that simple only if you don't have deep call stacks
01:21:40 <b_jonas> heck, even without that it still works
01:21:43 <b_jonas> even in a non-mutating way
01:21:53 <b_jonas> sorry, the in-place store isn't very relevant
01:21:57 <int-e> imode: it starts going wrong when it tries to treat SKI as a monoid. That would require b = I b = K I a b = K a b = a
01:22:11 <b_jonas> so no, I just don't get the point
01:22:36 <int-e> (monoids are associative)
01:23:20 <b_jonas> int-e: wait, a monoid with respect to composition? HAHAHAHA
01:24:09 <shachaf> I mean, you might have an operation that does steps A, then B, then C.
01:24:20 <imode> "Wang Tiles is a monoid, on Tiles as terms, with Wang-arrangement as the only operation."
01:24:24 <int-e> imode: it's also cute how it builds full SKI reduction into the tiles ("Connection Tiles").
01:24:56 <shachaf> You can write the state as struct { enum IP { Start, A, B, C, Done }; AState a; BState b; CState c; }; or something.
01:24:57 <int-e> imode: and the claim that you end up with a finite set of tiles in the end... I don't see where this is proved (which is no surprise because it's necessarily false)
01:25:09 <shachaf> But in fact you only need enough memory for one of those.
01:25:26 <shachaf> (In this case you can just use a union. But in general you might share some state between some steps and so on.)
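Written out in C, shachaf's struct with the union applied; AState, BState and CState are placeholders for whatever each step actually keeps live.

    struct AState { int progress; };
    struct BState { double acc; };
    struct CState { char msg[32]; };

    /* The A/B/C step states are never live at the same time,
       so a tagged union is enough; ip is the tag. */
    struct Op {
        enum { Start, A, B, C, Done } ip;
        union {
            struct AState a;
            struct BState b;
            struct CState c;
        } u;
    };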
01:25:27 <imode> int-e: if you look at page 9, you can see an example breakdown: what they've done is map a particular term to a particular set of tiles.
01:25:37 <b_jonas> shachaf: sure, you make it a tagged union instead.
01:25:45 <b_jonas> possibly sharing some common state that is not often modified
01:25:53 <b_jonas> such as the caller pointer
01:25:53 <imode> so you don't end up with a _general_ set of evaluation rules for SKI in wang tiles, but you have a schema for _converting_ arbitrary SKI terms to wang tiles.
01:26:12 <int-e> imode: infinitely many of them, and SKI reduction is built into the tile set.
01:26:31 <shachaf> Sure, you can do all that, but liveness analysis does it automatically for you, is all I'm saying.
01:26:33 <imode> no, not infinitely many.
01:26:42 <int-e> even if they don't actually treat it as a monoid (but I think they do) this is utter trash.
01:26:57 <imode> you're not getting what I'm saying: they build a tileset for evaluating _one_ specific SKI term.
01:27:00 <imode> not the whole space of them.
01:27:08 <imode> and there's a schema for generating those tilesets.
01:27:10 <int-e> imode: if it has a normal form
01:27:16 <b_jonas> shachaf: the scopes of variables in the code, before compiling to an async function, are mostly tree-like, that is, for any two variables, either the first lives longer or the second lives longer, so the scope of one contains the other,
01:27:17 <imode> you do not need infinitely many tiles.
01:27:48 <b_jonas> shachaf: and that implies that if you turn it to a struct, you always get a properly nested union-struct, with copies/moves necessary only when the original code copies/moves a value.
01:28:49 <imode> if this paper is bullshit then supercombinators are bullshit. :P
01:29:52 <int-e> imode: For every reduction step, the full redex must be captured in a Wang tile (Fig. 2 is the only place where reductions happen), so all we need is a starting term with unbounded redex size.
01:30:10 <int-e> imode: Really, this is nonsense. I'm sorry to have seen this paper.
01:30:25 <imode> I mean, write up a response to it with proofs.
01:30:55 <imode> I think hasty dismissal isn't a good way of doing things.
01:31:00 <b_jonas> two dimensions might help to represent a tree structure, because you can put each child expression of the tree one row below the parent, so it's easier to skip the expression by just skipping through markers in the row above, without having to count parentheses
01:31:09 <rain1> I see int-e's point
01:31:25 <rain1> it would require infinitely many tiles
01:31:26 <b_jonas> I mean, if you want to simulate evaluating combinatory logic with a two-dimensional CA
01:31:30 <rain1> but a wang tileset should be finite
01:31:32 <b_jonas> of course with just wang tiles it won't help,
01:31:43 <b_jonas> because with Wang tiling you need the second dimension for time
01:31:48 <b_jonas> (it may be parallel, but still)
01:33:15 <imode> bear in mind there are also other versions of this paper that go a little more in depth with actual proofs.
01:34:18 <int-e> "22nd International Workshop on Concurrency, Specification and Programming" ... sad.
01:39:00 <rain1> I can see that some wang tilings are like CAs
01:39:08 <rain1> but I don't see that all wang tilings are CAs
01:39:13 <rain1> I can't give an example
01:39:41 <rain1> it seems like the constructed tilesets have been built with a uniqueness property where the next row/col can be determined by the current one
01:40:13 <rain1> or would it be that each cell is determined by the 3 above it?
01:40:44 <b_jonas> rain1: of course not. they're more powerful than CAs, because they can do time travel loops, which makes them PSPACE-complete or something.
01:41:03 <rain1> that's the key i was missing
01:41:27 <imode> b_jonas: got a proof for that? lmao.
01:41:53 <b_jonas> imode: well, they're actually uncomputable, because the space is unbounded
01:41:59 <b_jonas> so PSPACE-complete is not really true
01:42:04 <b_jonas> they're not limited to PSPACE
01:42:11 <b_jonas> you know how they're uncomputable, right?
01:42:33 <imode> yes... very much aware.
01:42:38 <int-e> imode: Wang tiles *can* implement string rewrite systems, using an approach like that. But it turns out that combinatory logic is not a string rewrite system, and you actually need to put in some effort (which the authors haven't done) to treat it by string rewriting
01:42:42 <b_jonas> you can just simulate any nondeterministic turing-machine with one tape
01:42:57 <b_jonas> the key point is that it allows any nondeterminism
01:43:02 <b_jonas> which is why you can go back in time
01:43:08 <rain1> i don't like the way imode responsd
01:43:35 <imode> b_jonas: how does nondeterminism allow you to go backwards in time?
01:43:52 <imode> rain1: what's that supposed to mean?
01:43:56 <int-e> imode: crucially, the objects of string rewriting (i.e., strings) do have a monoidal structure.
01:44:38 <b_jonas> imode: you can send information back in time by nondeterministically guessing the information back in the past, sending that information to the future, then making the solution impossible in the future if the information you want to send back isn't the one that you got from the past
01:45:27 <b_jonas> imode: usually it's told the other way, as in, time travel allows arbitrary nondeterminism, in the TCS sense, but it works backwards too
01:45:32 <imode> you've lost me, but I don't think that's how nondeterministic execution works.
01:45:55 <b_jonas> imode: how does it work then?
01:45:55 <int-e> imode: and also crucially, the redexes for any given SRS have bounded size.
01:47:00 <shachaf> b_jonas: Sometimes (often?) a variable is declared early and then stops being used in the middle of a block.
01:48:16 <imode> int-e: can you explain to me how they got their results, then? why is it that they were able to compile a particular term down to wang tilings? I don't quite buy the argument that the compiled terms are required to have normal forms.
01:48:43 <int-e> imode: paper is incredibly patient. you can write anything you like.
01:48:45 <imode> b_jonas: you have multiple possible choices per computation step, and thus multiple possible paths.
01:48:51 <b_jonas> shachaf: yes. then it's not really strictly tree-like. but note that that only matters if there's a yield in between when it's stopped using and when another variable is started to use. it doesn't matter if you just compute a different new variable from the old one in one step without yield, which is very common.
01:48:58 -!- tromp has quit (Ping timeout: 244 seconds).
01:48:59 <int-e> imode: there are no results. there are unsubstantiated claims.
01:49:00 <b_jonas> imode: yes. that's the guessing part.
01:49:17 <imode> int-e: and yet they were able to show a particular reduction...
01:49:19 <b_jonas> you guess by branching into a possible computation for each possible value the information you're guessing can have.
01:49:25 <shachaf> b_jonas: Why doesn't it matter?
01:49:55 <b_jonas> shachaf: because where there's no yield, you don't have to save or load the values to the state struct.
01:50:00 <imode> int-e: help me out here, what's the problem? can you show me a combinatory term that's not able to be translated (using this schema) to wang tiles?
01:50:14 <int-e> imode: yes, you can write down any finite reduction in that fashion. but it doesn't generalize to infinite reductions, and the system is not sound (it allows you to write down non-reductions that have wrong results, like the a = b chain above)
01:51:12 <b_jonas> shachaf: yes, I admit that in some cases it's not strictly tree-like, and you will definitely get such a case when eg. you have a server that's handling a hundred requests asynchronously and each can take a different amount of waiting on another party.
01:51:35 <imode> one moment, gotta walk the dog. highlight me and I'll read back.
01:52:02 <int-e> same here, sans dog
01:55:29 <b_jonas> int-e: well, technically you may need a more complicated a=b chain than the one above, because if you don't have S in your expression, then you can always just restrict the rules to do evaluation steps only when the argument is an atom. you need something with parentheses required in it.
01:55:44 <b_jonas> but yeah, that if a is a parenthesized expression.
01:56:22 <b_jonas> you do have parentheses, where you write I b = K (I a) b = K I a b = K a b
01:56:29 <b_jonas> it's the middle step there that's wrong
01:57:01 <b_jonas> so yeah, I guess you might not need parentheses
01:59:34 <b_jonas> I guess you might not need S
02:06:12 <b_jonas> imode: ignore what I said while I was gone, that's nonsense. listen to int-e, he's got it right.
02:06:32 <imode> so, here's my reasoning: if you look at page 9 of that paper, the list of "Subterms", they've taken a particular combinatory term and knocked it down to a particular set of objects which they map using a schema to a particular set of tiles, and the tiling of those tiles produces the result. even if you had something like the Y combinator written down, when applied to a particular combinator term, _you would
02:06:32 <imode> still_ be able to generate a finite number of tiles, but your tiling would be infinite.
02:08:25 <imode> I agree that the paper is somewhat underspecified but I don't see the damning evidence that it's all trash.
02:09:46 <imode> I'm also not too keen on insulting people's work after a quick glance at it, but that's just my approach.
02:10:29 <imode> still trying to find that alternate version of said paper.
02:19:45 -!- nchambers has joined.
02:19:59 -!- nchambers has changed nick to uplime.
02:22:48 -!- john_metcalf has quit (Ping timeout: 272 seconds).
02:24:43 <imode> http://ceur-ws.org/Vol-1269/paper34.pdf found this one. it details the same stuff but also includes an implementation of a general CL interpreter within Wang tiles by way of a Turing machine encoding, similar to yours, int-e.
02:25:47 <imode> so to me it details two approaches: compiling CL terms to their corresponding tilesets, and interpreting an encoding of CL with a single tileset.
02:26:33 <imode> the latter is obvious. the former not so much, but I don't see why it isn't a possible thing. perhaps there even exists an implementation to go along with the paper. I'll try to track one down.
02:43:49 <b_jonas> Hm, Wang tiles correspond quite closely to 1-d nondet cellular automata. I'm not sure if I realized that yet or not. That actually explains why Wang tiles are considered powerful, in the sense that they can do hard to predict things with just a few different tiles. The tiles correspond to the states of the CA, the CA has that neighborhood where each cell depends on only two cells in the past (not three),
02:44:02 <b_jonas> only the border conditions of Wang tiles are rather strange, very different from what we're used to with CAs.
02:44:43 <b_jonas> I'm not sure if I had realized that before about Wang tiles or not.
02:45:18 <b_jonas> I think I missed it because I learned about Wang tiles and then mostly forgot about it, and only later learned about how even deterministic 1D CA are so powerful.
02:45:57 <imode> yeah. there are multiple possible tilings, and it's actually pretty cool to watch a backtracking solver (starting from a seed tile) try to work out a tiling by spiralling out.
02:46:29 <imode> it is possible to make fully deterministic tilings, though, which correspond to fully deterministic 1D CAs.
02:46:40 <imode> the specific literature eludes me at the moment.
02:47:53 <b_jonas> imode: sure, because a nondet CA is strictly more powerful than a deterministic CA
02:48:05 <b_jonas> that's usually the way we define nondeterministic variants of things
02:48:33 <imode> that's not at all true.
02:49:06 <imode> the fact that a nondeterministic TM can be simulated within a deterministic TM and vice versa.
02:49:11 <imode> the same goes for CAs.
02:49:27 <b_jonas> that's not what I'm saying
02:49:28 <imode> nondeterministic TMs are not more powerful than deterministic TMs.
02:49:39 <b_jonas> I'm only saying one dimension
02:50:03 <b_jonas> that the deterministic thingy can be translated to a nondeterministic thingy easily
02:50:46 <imode> nondeterministic TMs can be pretty easily converted to deterministic TMs.
02:51:01 <b_jonas> imode: no, that's backwards
02:51:52 <imode> how is that backwards.
02:53:57 <b_jonas> imode: um, there's probably some terminology problem here
02:54:04 <imode> nondeterministic TMs can be converted to deterministic TMs via dovetailing, which isn't that hard of a thing to do. they can be seen simply as a class of machines that utilize the dovetailing technique.
02:54:38 <imode> I can sort of see what you mean if you say that nondeterministic TMs can be seen as taking deterministic TMs as a subset.
02:54:51 <imode> but I can invert that and say that nondeterministic TMs are really just a specific subclass of TMs.
02:55:02 <imode> s/subclass of TMs./subclass of deterministic TMs.
02:56:12 <b_jonas> imode: um yes, but the dovetailing is still much harder in general than converting the other way, from a deterministic TM to a nondeterministic TM, for which you don't need dovetailing, only some trivial transformation
02:56:48 <b_jonas> the specific transformation of course depends on how exactly you define the TM, but then so does the other direction
02:56:58 <b_jonas> (how you define TM, and how you define nondeterministic TM)
02:57:00 <imode> like I said, I can invert the relation and say that nondeterministic TMs are just classes of deterministic TMs in which you want to phrase parallel computations.
02:57:08 <imode> or pseudorandom choice.
02:57:16 <imode> between multiple branches.
02:57:39 <b_jonas> it's more 1-D cellular automata than turing-machines that matter here, and the "dovetailing" part actually gets much easier there, mind you
02:57:48 <b_jonas> because the CAs are naturally parallel
02:58:21 <b_jonas> the hard part is just making sure that the memory of each simulated copy grows fast enough, which is what remains of the dovetailing
02:58:21 <imode> I'm not sure they are, but they _do_ lend themselves pretty naturally to parallel evaluation methods.
02:58:58 <imode> like I get what you're saying: nondeterministic TMs are harder to phrase as deterministic TMs because of the extra overhead.
02:59:16 <imode> but I can make the same argument that simulating a game of chess is hard on a deterministic TM rather than a gameboard with a couple of pieces and some parallel rewrite rules.
02:59:40 <imode> it's not a fault of the concept of deterministic TMs, but it's just a class of machines.
03:01:26 <imode> your choice of model defines what kinds of computations are easy and what kinds are hard.
03:05:01 <b_jonas> imode: wait, are you saying that the paper that translates the combinatory logic evaluation to Wang tiles uses such a transformation that can make the Wang tile pattern infinite in some significant way (not periodic, nor even some more complicated pattern that's easy to describe in a finite way), even if the evaluation tree of the combinatory logic is finite?
03:05:51 <imode> not sure what'cha mean, could you rephrase that? having a hard time parsing stuff.
03:07:13 <b_jonas> imode: um, if you have a set of Wang tiles, there are at least two questions you can ask. (1) one question is whether they can tile a rectangle with opposite edges identified (e.g. a flat torus), which is more or less the same as asking whether they tile the plane in a periodic way;
03:07:31 <b_jonas> (2) the other question is whether the tiles tile the plane, not necessarily in a periodic way.
03:09:49 <imode> ah. what I was really just saying is that, using their scheme, it's possible to generate a set of Wang tiles that correspond to terms and subterms (and applications of them) within a certain system of combinators. so small applications of combinators generate other small applications of combinators. these are always finite: one particular string of combinator applications generates another string of
03:09:49 <imode> combinator applications, and each string can be taken as a single object or multiple objects being applied to each other.
03:09:51 <b_jonas> these two questions are essentially different: the first one is computable with the wang tiles as input, the second one is not.
03:10:13 <b_jonas> this is magic that depends on the periodic boundary conditions. whether the Wang tiles tile a rectangle with all zero edges isn't computable.
03:10:19 <b_jonas> if I understand correctly.
03:10:33 <b_jonas> so I guess maybe there are three different questions, not two.
03:10:34 <imode> whether there exist tilings at all for certain tilesets is undecidable, yeah.
03:11:21 <imode> so, the key here is this: we can represent subterms like (SKK) as single terms. any particular combinatory term is guaranteed to generate some particular set of subterms, and we can consider this set finite and use it to build up other terms.
03:11:47 <b_jonas> so my question is essentially, if the combinatory logic expression has a finite evaluation, then what kind of tiling do you get from that, if you translate it using what the article does? a tiling of the torus, of the plane, of a rectangle with zero borders, or something else?
03:11:57 <imode> for any combinatory logic formula, we can generate a set of wang tiles from it such that the tiling of those Wang tiles matches up with the reduction of _that single particular combinatory logic formula_.
03:12:18 <imode> well, that's the thing. the tiling varies based on the expression you wanna evaluate.
03:12:22 <b_jonas> and of course whether this transformation is one for which you don't need to compute the entire evaluation tree in advance.
03:12:28 <imode> because the tiles change from expression to expression.
03:12:38 <imode> (SKK)S has a different set of tiles, for example, than (SKK)K.
03:13:13 <imode> but the set of tiles is finite. and even if you include something like the Y combinator, or expressions that don't reduce to normal form, _you still have a finite set of tiles_.
03:13:48 <b_jonas> imode: wait, so it's a "cheating" transformation that does the turing-complete step of evaluating the term in advance?
03:14:18 <imode> not exactly, no. unless you consider a C compiler reducing things down to assembly a cheating transformation.
03:14:35 <imode> that's essentially their method.
03:15:10 <imode> no reductions are being performed beforehand.
03:15:21 <b_jonas> as in, do they basically take the infinite tileset of all possible combinators, or perhaps all possible combinators with a normal form or some such, and then if there's a finite evaluation, then obviously there's only finitely many combinators that appear in that evaluation, but you have to do the full evaluation first to determine that set.
03:15:56 <imode> you don't have to do the full evaluation to determine the set from what I can see.
03:16:06 <imode> like, look at page 9.
03:17:00 <imode> sorry, let me relink you.
03:17:02 <b_jonas> you mentioned at least two
03:17:20 <imode> http://ceur-ws.org/Vol-1032/paper-01.pdf
03:17:24 <b_jonas> though I'll have to go to bed very soon, but we'll have the logs
03:17:43 <imode> so, they broke down a particular combinatory term, (((K(S(KK)(S(KK)I))a)b)c)d, into subterms.
03:17:56 <imode> much like you'd break down a particular textual term into a graph of subterms, right?
03:18:48 <imode> T6, for example, is KK, and there are two places in that formula where KK is located.
03:20:14 <imode> you can then show, by way of the schema on page 6 (which I am still studying on how to use properly), that each subterm corresponds to a set of introductions, folds, unfolds, connections (which propagate terms forward) and terminals (which represent the end of a computation).
03:21:21 <b_jonas> imode: but that's one particular term. isn't this something that should apply to any combinatory logic expression?
03:21:49 <imode> correct, but you need to generate a _new tileset_ for every expression you want to evaluate.
03:22:39 <b_jonas> page 9 seems uninformative, most of it is a huge figure, but let me look at before that too
03:22:50 <imode> the _second_ version of the paper I linked shows a _general_ CL interpreter, which employs the use of a turing machine.
03:22:58 <imode> which can work for any CL expression.
03:23:58 <imode> http://ceur-ws.org/Vol-1269/paper34.pdf the second version.
03:24:31 <imode> so they effectively describe two approaches: one in which you compile a particular CL formula down to a set of wang tiles, and then use the tiling to compute the result of the reduction of that formula...
03:25:06 <imode> and the other in which you have to start with an interpreter of CL within Wang tiles encoded as a turing machine evaluating CL terms.
03:25:39 <b_jonas> from pages 5 and 6, I think the error in the paper (or one significant error, at least) is exactly what int-e said
03:26:08 <int-e> yeah and it doesn't work because the reduction isn't prepared to deal with arguments that are longer than a single character.
03:26:17 <int-e> ("it" being the Turing Machine)
03:26:51 <imode> not sure where you're getting that, int-e.
03:26:55 <b_jonas> int-e: either that, or it is prepared to deal with it by creating new tiles for each reduction of combinatory terms that ever occurs in the evaluation
03:26:57 <int-e> These people don't know what they're doing at the theoretical end of their work. They *may* know something about DNA computing and Wang tiles though I'm skeptical about that.
03:27:14 <imode> y'know, why don't you try backing up your statements instead of needlessly bashing people.
03:27:25 <imode> if they're wrong, diagram it in clear wording.
03:27:38 <int-e> imode: I'm looking at page 6 and there's no bracket handling in the reduction for K, I or S, except that S *inserts* brackets somewhere.
03:27:54 -!- uplime has quit (Ping timeout: 250 seconds).
03:28:15 <imode> they don't require bracket handling because they're not reducing a textual form of a CL expression.
03:28:31 <b_jonas> imode: I'm suspicious about the DNA computing too. that's like the people who claim that soap bubbles or protein folding can efficiently solve NP problems, because *in practice* the possible results of protein folding or soap bubbles can be guessed well from a minimization problem
03:28:36 <imode> they've extracted common subterms (like you would using graph reduction).
03:28:50 <int-e> imode: All you have to do is try running the TM on K(KI)K and see that it gets stuck in state TK.
03:29:05 <imode> int-e: can you provide an example tileset which shows that?
03:29:21 <imode> they're not encoding a TM, by the by.
03:29:31 <imode> well, in the first paper, and in the first section of the second.
03:29:35 <b_jonas> heck, int-e is probably right, those tiles do seem to explicitly assume that the application is associative, even if each of those colors can represent complicated terms
03:30:22 <int-e> imode: (\f. f f) (\f x p. p x (f f (S x))) S will require an infinite tile set. have fun doing the abstraction elimination...
03:30:25 <imode> b_jonas: DNA computing in this regard builds Wang tiles out of DNA molecules.
03:30:34 <int-e> I'm DONE. Good night.
03:30:37 <imode> and has actually been demonstrated. not a fan of the approach.
03:30:49 <imode> int-e: gooodnight. you've provided nothing.
03:31:25 <b_jonas> imode: right, but the real DNA won't actually compute full non-deterministic problems, just like how the real proteins won't magically fold into the optimal energy arrangements
03:31:49 -!- b_jonas has quit (Quit: leaving).
03:32:34 <imode> lots of hostility with a lack of evidence. for a place where unconventional computing methods are accepted, there's quite a bit of hostility.
03:42:47 -!- Lord_of_Life has quit (Ping timeout: 240 seconds).
03:45:04 -!- Lord_of_Life has joined.
03:46:23 <oerjan> . o O ( must resist temptation to look at the pdf to find out if imode or int-e is right )
03:50:55 -!- FreeFull has quit.
03:57:10 <oerjan> imode: just from viewing the discussion here, i suspect int-e is at the point where he considers the error so obvious that it feels like an insult to expect him to write out the counterexample in detail.
03:57:53 <oerjan> as in, he has a general idea why it _cannot_ work but it's a pain to write out.
03:58:08 <imode> then there's nothing more to discuss, as noted by him.
03:59:09 <int-e> To add insult to the injury: if you restrict the redexes of SKI calculus to a finite set then you get a ground term rewrite system, for which reachability is decidable; hence it's no longer Turing complete.
04:00:12 -!- uplime has joined.
04:00:15 <imode> if you're not going to go into any further detail I'm pretty much done as well.
04:00:27 <imode> no need to watch people trash other people's work.
04:00:58 -!- danieljabailey has joined.
04:01:19 <int-e> But yes, I'm totally unwilling to work out the counterexample in any detail because Wang tiles are not the right level of abstraction to work on. I've given the high-level reason further above: Since the full redex is encoded in a Wang tile, and there are reductions for which there are infinitely many distinct redexes, a finite set cannot be enough.
04:01:52 <imode> a full redex _is not_ encoded in a wang tile. complexes and subredexes are.
04:02:20 <int-e> You're wrong. ("redex" refers to the subterm that is headed by the K or S being contracted)
04:02:35 <imode> "You're wrong." "That's trash."
04:02:48 <imode> I mean, are you going to go into any kind of detail whatsoever.
04:03:15 <int-e> Yeah. I'm angry at this point, because you have shown no evidence of actually doing any fucking work yourself.
04:03:23 -!- int-e has left.
04:03:49 <imode> how someone can get so emotional over something so irrelevant is beyond me. hope he's alright.
04:04:21 <imode> like, if I'm wrong, perfect, I didn't write the damn paper.
04:05:08 <imode> hell I'm even trying to work out a tileset for his counterexample, which seems like a reasonable approach.
04:05:19 <imode> hope he feels better when he comes back.
04:06:40 <oerjan> if he's right then evaluation should get stuck when one of those newly constructed redexes (that aren't represented in the tiles) has to be reduced.
04:08:24 <imode> I agree with him. I'm trying to follow the methods they used for constructing factorial near the bottom of the paper.
04:10:18 <imode> they give a series of tuples where each tuple corresponds to a wang tile. each side of a tile corresponds to a subterm, and from what I saw, subterms can be shared. I'm not sure how K(KI)K leads to a redex that can't be represented (or hasn't been generated) as a tile.
04:10:23 <imode> or as a series of tiles.
04:11:37 <oerjan> the K(KI)K example was meant for the TM construction, i suspect
04:11:50 <imode> ah. I don't really care about that bit.
04:12:14 <imode> it's not really interesting. the compilation bit, now _that's_ interesting.
04:13:02 <imode> I'll have to draw out the factorial tiles tomorrow.
04:13:34 <imode> they introduce extra convenience combinators related to pairs and naturals, but give no tiles that correspond to them.
04:14:27 <imode> ahhh, nevermind, those are schemes, not actual tilesets.
04:14:46 <imode> meaning you stick whatever numbers you want in the subscripts and out pop your tiles. _that_ makes sense.
04:14:58 <oerjan> i think the (\f. f f) (\f x p. p x (f f (S x))) S example was for the compilation thing, although i suspect p is unnecessary here...
04:15:16 <oerjan> so (\f. f f) (\f x. x (f f (S x))) S
04:15:48 <imode> is the gist just "generate an infinite series of applications like S(S(S..."?
04:16:16 <imode> or am I missing something.
04:16:44 <imode> because from what I can see, there's no issue in doing something like that.
04:16:54 <oerjan> well also to do it in such a way that the arguments passed to functions have to grow.
04:17:24 <imode> there's nothing in here that says that you have to represent things like that as a single tile: in fact, that's not the case at all.
04:17:29 <oerjan> so that they cannot be something that is already encoded as a single tile
04:18:08 <imode> correct: the structure of the terms is incidental, based on how the terms pair. you're not shoving entire trees into single tiles, otherwise you _would_ need infinite tiles.
04:18:28 <imode> but you _can_ extract commonly found subtrees and use them to form larger applications.
04:18:56 <imode> which if I'm drawing this correctly, the addition example does.
04:19:19 <oerjan> imode: ok, but can it reduce something like S x y z, where x,y and z are _not_ subterms of the original expression?
04:19:45 <imode> yes, because you can construct the subterms that _aren't_ in the original expression _from_ subterms in the original expression.
04:20:03 <imode> at least from what I can see.
04:20:58 <imode> like their tiles and reductions reflect subterm sharing.
04:21:04 <oerjan> imode: you can construct them, but does reduction work for them?
04:21:16 <imode> yeah, looks like it. otherwise naturals would break down almost instantly.
04:22:18 <imode> http://ceur-ws.org/Vol-1032/paper-01.pdf trying to draw out and navigate page 13 of this PDF.
04:23:22 <imode> the footnote on that page details that n and m are essentially "pre-selected numbers", so fill in the naturals and generate your tiles from this schema.
05:11:50 <oerjan> imode: ok i browsed through some of the pdf. assuming you're not meant to "evaluate ahead" to find tile colors, as far as i can see, there is no way to reduce S x y z unless at least _one_ of S x y, z, or S x y z is a tile color. Constructing an SKI expression that eventually reduces S x y z where none of x, y or z occur in the original expression is left as an exercise.
05:17:11 <oerjan> or more generally, you cannot ever reduce an application if it is not equivalent to an original tile color or the application of a pair of tile colors.
05:17:54 <imode> that sounds about right to me.
05:17:57 <oerjan> which means you cannot, say, construct an infinite list of numbers.
05:18:39 <oerjan> (say, lazy list of factorials)
05:18:51 <imode> I'll have to try that.
05:21:12 <oerjan> that monoid thing also looks a bit grating - it seems to me that you could easily get into a situation where evaluation confuses a b (c d) with a (b c) d
05:21:39 <imode> I tried diving down the rabbit hole with that.
05:24:10 <oerjan> although i'm a little unsure whether the "sound computation grid" property is supposed to help with that
05:26:26 <oerjan> hm actually it _does_ seem to be intended to do that
05:27:48 <oerjan> essentially it enforces a kind of parenthesization by dividing up the grid into non-interacting parts.
05:29:56 <oerjan> (how they expect DNA to enforce that property i dunno ;P)
05:31:07 <imode> the DNA stuff is actually plausible: you can form small DNA tiles by forming two DNA strands in such a way that the base pairs encode the colors of the edges, and edges only bind to complementary base pairs.
05:31:16 <imode> that ironically is the most plausible point of this, because it's been done.
05:33:03 <oerjan> yes, i mean the sub-grid property, not the individual tiles
05:33:17 <imode> sorry, reading comprehension failed me.
06:59:59 -!- oerjan has quit (Quit: Nite).
07:13:26 <zzo38> Now in this GURPS I can tell who is the shapeshifter because they have "X" on their forehead. Next, I have to stop them from writing "X" on everyone else's forehead too!
08:26:48 -!- tromp has joined.
10:02:03 -!- AnotherTest has joined.
10:06:34 -!- AnotherTest has quit (Ping timeout: 250 seconds).
10:16:56 <imode> after some careful consideration that paper was full of shit.
10:17:21 <imode> I tried working some of the tilings out and yeah, the people above me were right.
10:33:08 -!- int-e has joined.
10:33:47 -!- tromp has quit (Remote host closed the connection).
10:33:59 -!- arseniiv has joined.
10:40:30 -!- tromp has joined.
10:58:39 <int-e> imode: Three remarks about why I might care more than I maybe should. a) this is essentially in my current area of research (term rewriting, tree automata (which feature in decidability results for ground term rewrite systems; http://tata.gforge.inria.fr/ may be the best available source on that topic)) b) I hate to see people wasting their time with wrong claims c) it becomes frustrating when I fail...
10:58:45 <int-e> ...to convey that information efficiently, ending up wasting even more time rather than helping.
10:58:59 -!- AnotherTest has joined.
10:59:42 <imode> int-e: it's fine, you're good! I concluded independently that it's not a good paper.
11:00:25 <imode> there's some woeful underspecification or lack of understanding on their part.
11:02:58 -!- derpy_ has quit (Ping timeout: 245 seconds).
11:03:26 -!- derpy has joined.
11:03:42 -!- AnotherTest has quit (Ping timeout: 252 seconds).
11:16:20 -!- imode has quit (Ping timeout: 250 seconds).
11:49:22 -!- arseniiv has quit (Ping timeout: 246 seconds).
12:42:39 -!- b_jonas has joined.
12:43:19 <b_jonas> int-e: it doesn't get stuck on (KI). there are tiles to open the parenthesis. anywhere, even in a left term. they really seem to believe that application is associative.
12:44:11 <b_jonas> int-e: also, not only do they add a color for every combinator calculus term, but also a tile for every combinator calculus reduction. not just one step reduction, but any reduction. so you can do any finite computation in one tile.
12:44:52 <b_jonas> that latter part is a fixable bug, but I think it illustrates how the authors don't understand what they're talking about.
12:44:59 <int-e> b_jonas: as oerjan suggested, that was for the Turing Machine in the second paper.
12:45:58 <b_jonas> int-e: that may make a bit more sense, because a TM can actually be simulated as a 1-D CA in a more straightforward way than the, uh, monoid of combinator calculus
12:46:24 <b_jonas> the "monoid" thing confused me because they don't explicitly state what operation it's a monoid over, but it does become clear by page 6
12:46:56 <int-e> ("that" referring to the K(KI)<some combinator>)
12:48:32 <b_jonas> int-e: and yes, I agree that you can't work out a proper counterexample: the paper is vague enough, lacking precise definitions and proofs, that you can't point out more specifically than that where the error is
12:50:43 <b_jonas> int-e: the part where they have single-tile reduction for every calculation is great, because if they actually implemented that, it would let them get the right result in a randomized implementation that looks for random small tilings, as opposed to a true nondeterministic implementation, so the DNA computation would run trivially with just one transformation from the input to the result and a lot of
12:52:48 <b_jonas> wtf, now even oerjan looked at it? why?
12:52:57 <b_jonas> (still looking at the logs)
12:53:58 <HackEso> CDOP is OCPD, except with the letters in the *proper* order.
12:55:25 <int-e> Put differently, he probably reached the point where he had to find out what all the ruckus was about.
12:56:29 <b_jonas> int-e: also, I verified that for both of those disconnected polyominoes, there's essentially one tiling that is invariant under the translations (0,8) and (8,0), and I know the tilings I had back when I first tried this were invariant under those. that implies that the tilings you show on grid6.png are the ones I found.
12:57:32 <b_jonas> int-e: these polyominoes are significant because I think, but I'm not sure, that they're the only 4-square polyominoes that can't tile the plane with just translations, horizontal and vertical mirrors, and 180-degree rotations; you really need a 90-degree rotation or a diagonal mirror
12:57:41 <b_jonas> there was an article on that, let me check
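A brute-force Haskell sketch of the kind of verification b_jonas describes: counting tilings of the 8x8 torus (equivalently, plane tilings invariant under the translations (8,0) and (0,8)) by copies of a piece under a chosen set of isometries. The shapes from grid6.png are not reproduced in the log, so the T-tetromino below is only a stand-in; trimming the isometries list tests claims like the one above, and the count is raw (translates of the same tiling are counted separately).

    import Data.List (nub, sort)

    type Cell = (Int, Int)

    n :: Int
    n = 8

    -- Stand-in shape (T-tetromino); substitute the real pieces here.
    shape :: [Cell]
    shape = [(0,0),(1,0),(2,0),(1,1)]

    -- The eight isometries of the square lattice fixing the origin.
    -- Restrict this list (e.g. drop the 90-degree rotations and
    -- diagonal mirrors) to test which isometries a tiling needs.
    isometries :: [Cell -> Cell]
    isometries =
      [ \(x,y) -> ( x, y), \(x,y) -> (-y, x)
      , \(x,y) -> (-x,-y), \(x,y) -> ( y,-x)
      , \(x,y) -> (-x, y), \(x,y) -> ( y, x)
      , \(x,y) -> ( x,-y), \(x,y) -> (-y,-x) ]

    -- Every distinct placement of every orientation on the n-by-n torus.
    placements :: [[Cell]]
    placements = nub
      [ sort [ ((x+dx) `mod` n, (y+dy) `mod` n) | (x,y) <- map f shape ]
      | f <- isometries, dx <- [0..n-1], dy <- [0..n-1] ]

    -- Backtracking exact cover: always cover the smallest free cell,
    -- so each torus tiling is generated exactly once. Exponential in
    -- general, but small enough to be feasible at n = 8.
    tilings :: [Cell] -> [[[Cell]]]
    tilings []         = [[]]
    tilings free@(c:_) =
      [ p : rest
      | p <- placements, c `elem` p, all (`elem` free) p
      , rest <- tilings (filter (`notElem` p) free) ]

    main :: IO ()
    main = print (length (tilings (sort [ (x,y) | x <- [0..n-1], y <- [0..n-1] ])))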
13:20:51 -!- arseniiv has joined.
13:58:44 -!- arseniiv has quit (Ping timeout: 246 seconds).
14:08:51 -!- arseniiv has joined.
15:06:36 -!- uplime has quit (Ping timeout: 272 seconds).
15:13:17 -!- Essadon has joined.
15:14:01 -!- Essadon has quit (Max SendQ exceeded).
15:14:27 -!- Essadon has joined.
15:15:47 -!- b_jonas has quit (Quit: leaving).
15:18:28 -!- AnotherTest has joined.
15:23:04 -!- AnotherTest has quit (Ping timeout: 268 seconds).
15:35:36 -!- AnotherTest has joined.
15:42:30 -!- Lord_of_Life_ has joined.
15:45:00 -!- Lord_of_Life has quit (Ping timeout: 250 seconds).
15:45:04 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
15:49:20 -!- AnotherTest has quit (Ping timeout: 246 seconds).
16:14:19 -!- AnotherTest has joined.
16:24:41 -!- AnotherTest has quit (Ping timeout: 246 seconds).
16:37:27 -!- tromp has quit (Ping timeout: 240 seconds).
17:14:18 -!- Sgeo_ has quit (Read error: Connection reset by peer).
17:14:44 -!- Sgeo_ has joined.
17:39:37 -!- AnotherTest has joined.
18:00:02 -!- S_Gautam has joined.
18:04:28 <HackEso> olist 1150: shachaf oerjan Sgeo FireFly boily nortti b_jonas
18:13:11 -!- oerjan has joined.
18:33:08 -!- AnotherTest has quit (Ping timeout: 250 seconds).
18:33:43 -!- b_jonas has joined.
18:36:00 <b_jonas> private fireworks are going strong
18:37:38 -!- tromp has joined.
18:38:13 -!- FreeFull has joined.
18:43:01 <lambdabot> Local time for shachaf is Mon Dec 31 10:42:59 2018
18:44:47 <oerjan> hm xkcd's consensus new year was about a quarter hour ago
18:45:04 <oerjan> so happy new year, i guess!
18:57:33 <b_jonas> it's not yet new year, but happy new year to you as well
18:57:34 <oerjan> b_jonas: that crap paper actually has a kind of solution to the monoid confusion in the "sound computation grid" section. however, the fact that every reduction result needs to be represented on a tile clearly blows it for any expression that constructs something infinite, like the lazy list of factorials i suggested.
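For reference, the factorial stream in question, written as plain Haskell rather than SKI: every element forces a fresh, strictly larger normal form, so a tile set obliged to carry a colour for every reduction result cannot stay finite for this program.

    facts :: [Integer]
    facts = scanl (*) 1 [1 ..]   -- 1, 1, 2, 6, 24, 120, ...

    main :: IO ()
    main = print (take 10 facts)
    -- [1,1,2,6,24,120,720,5040,40320,362880]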
19:01:01 <oerjan> and yeah, i had to see if the problem with the paper was as obvious as int-e implied.
19:01:13 <oerjan> (plus, i didn't like the animosity)
19:02:46 -!- imode has joined.
19:03:08 -!- AnotherTest has joined.
19:03:23 <imode> happy new year's eve.
19:04:04 <b_jonas> new year starts in less than four hours by the way
19:15:40 <imode> is freenode doing that newyears thing again?
19:16:29 <b_jonas> imode: I'm not sure what counts as "that newyears thing" specifically. there's a #freenode-newyears linked from the topic of #freenode , but then, irc channels are cheap once you have a network up
19:16:43 <b_jonas> imode: do you mean sending wallops?
19:16:49 <b_jonas> or global notices or whatever?
19:17:12 <imode> the channel was what I was referring to.
19:20:56 -!- arseniiv_ has joined.
19:24:13 -!- arseniiv has quit (Ping timeout: 245 seconds).
19:30:22 <oerjan> it's a common human state hth
19:30:50 <b_jonas> in NES Super Mario Bros, how many fireballs does it take to kill Bowser (without jumping on him)? I thought it took six, but https://www.youtube.com/watch?v=xJcAPGf0Z6A says five.
19:32:42 <b_jonas> I may be mixing it up with GB Super Mario Land; in that one it takes six fireballs to kill most of the bosses (the castle bosses are immune, and the witch is possible to kill but very hard because she despawns immediately after a hit)
19:34:47 -!- AnotherTest has quit (Ping timeout: 240 seconds).
20:10:52 -!- uplime has joined.
20:14:06 -!- AnotherTest has joined.
20:14:25 <oerjan> in girl genius, maxim's nose seems to be growing lately.
20:22:24 <int-e> Hmm private fireworks are illegal in the city this year... still hearing stuff though. People don't know or don't care.
20:22:42 -!- AnotherTest has quit (Ping timeout: 246 seconds).
20:23:20 <b_jonas> int-e: they are legal here with strong restrictions, but most of the ways people use them are illegal and sometimes dangerous
20:23:50 <int-e> But I guess it's quieter than last year.
20:26:43 <oerjan> i think they're allowed in the suburbs here but not the city center
20:30:30 <int-e> it's per municipality, and I guess it's allowed outside of cities where it's not a fire hazard.
20:33:16 <oerjan> forbidden inside the red stippled line, allowed 18:00 - 2:00 outside https://www.tbrt.no/images/artikkelbilder/ForbudssoneforfyrverkeriiTrondheim.jpg
20:34:06 * oerjan isn't even on that map, but a bit further east
20:35:12 <oerjan> (there are municipal fireworks inside the zone though)
20:36:53 <int-e> b_jonas: hah. "experts warn of illegal firecrackers"
20:37:46 <int-e> I'm used to that warning. It hasn't changed in the past 30 years.
20:38:45 <int-e> (and personally I don't even like the legal ones)
20:39:48 <arseniiv_> oerjan> hm xkcd's consensus new year was about a quarter hour ago => wow, pretty close to mine! (UTC+5)
20:41:26 -!- arseniiv_ has changed nick to arseniiv.
20:41:28 <int-e> hmm, average time zones by person? so india and china have a huge influence?
20:41:46 <int-e> (I had not seen the xkcd yet)
20:42:10 <int-e> right, median makes more sense actually
20:46:29 -!- tromp has quit (Remote host closed the connection).
20:49:17 <arseniiv> ah, not that close, it’s 6:30 UTC, not what I thought, so not that significant. Or on the contrary, more significant, as it’s further from the median and so rarer(?)
20:50:53 <b_jonas> average would be a bad idea, it may be skewed by the few people who live in mediaeval theme parks that are in timezones several centuries before most of the earth
20:51:03 <b_jonas> ok, maybe not that skewed, but still
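The median-by-person idea amounts to a population-weighted median of UTC offsets. A small Haskell sketch; the offset/population pairs are rough illustrative figures (in billions), not a real dataset, but with them the answer lands near the roughly 18:30 UTC moment oerjan mentioned.

    import Data.List (sortOn)

    -- Weighted median: the smallest value at which the running weight
    -- reaches half of the total weight.
    weightedMedian :: [(Double, Double)] -> Double
    weightedMedian xs = go 0 (sortOn fst xs)
      where
        half = sum (map snd xs) / 2
        go _   []             = error "weightedMedian: empty list"
        go acc ((v, w) : vws)
          | acc + w >= half   = v
          | otherwise         = go (acc + w) vws

    main :: IO ()
    main = print (weightedMedian
      [ (-6, 0.5), (-5, 0.4), (0, 0.9), (1, 0.7), (3, 0.4)
      , (5.5, 1.7), (8, 1.6), (9, 0.3) ])
    -- prints 5.5: midnight at UTC+5:30 is 18:30 UTC.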
20:55:04 <zzo38> The red line doesn't seem to enclose anything.
21:02:25 -!- 77FABDJ8V has joined.
21:09:40 <oerjan> zzo38: the fjord is not very flammable and so considered obviously not included hth
21:10:14 <oerjan> i suppose you could ask how far from shore a boat must be
21:15:03 <zzo38> Yes, so they should make the line enclosed so that you can know.
21:15:29 -!- 77FABDJ8V has quit (Remote host closed the connection).
21:34:12 <fizzie> "You must not set off fireworks between 11pm and 7am, except for: -- New Year’s Eve -- when the cut off is 1am"
21:35:16 <fizzie> There were a few pops when there was still daylight, which is a bit odd.
21:37:01 <b_jonas> fizzie: sure. a lot of the illegal explosives go for the sound and blowing off stupid people's hands, not for the sight.
21:37:26 <b_jonas> it's traditional for a few people to blow their own hands off every New Year
21:38:41 <b_jonas> people are discouraged from that, both because the kinds of explosives used for it are illegal (and if you blow your hand off you're almost certainly found out) and because spending New Year in the hospital is unpleasant, but people still do it
21:39:15 <fizzie> There's far more fireworks here on Bonfire Night (aka Guy Fawkes Night) than on New Year's.
21:47:01 <b_jonas> Anyone in the +0200 timezone (Finland or Romania)? Should we do a small countdown then?
21:50:25 -!- tromp has joined.
21:51:39 <b_jonas> Because if not, then I'll reserve the celebration for the +0100 and leave some for the +0000, -0500, -0800 ones.
21:51:56 <b_jonas> Anyway, I'll prepare the New Year sausages.
21:54:52 -!- tromp has quit (Ping timeout: 246 seconds).
22:00:11 <b_jonas> For anyone with a +0100 timezone offset, happy New Year!
22:02:44 <oerjan> . o O ( i think you're off by one again )
22:09:42 -!- S_Gautam has quit (Quit: Connection closed for inactivity).
22:13:42 -!- uplime has quit (Ping timeout: 250 seconds).
22:18:48 -!- tromp has joined.
22:57:47 -!- tromp has quit (Remote host closed the connection).
22:59:50 <b_jonas> And now the New Year is coming up in our timezone offset very soon, the actual +0100 (France, Germany, Poland, Norway, Hungary)
23:02:17 * int-e listens to money going up in smoke.
23:10:32 -!- sprocklem has joined.
23:17:04 -!- uplime has joined.
23:30:21 -!- tromp has joined.
23:34:37 -!- tromp has quit (Ping timeout: 246 seconds).
23:51:33 <b_jonas> In the +0000 timezone offset (UK), New Year will start in about 9 minutes.
23:52:09 <fizzie> Not much action on the fireworks front.
23:56:09 <fizzie> There was more half an hour ago. Maybe everyone's watching the official ones.
23:56:23 <fizzie> (Sadly our windows are to the wrong direction.)
23:58:49 <b_jonas> fizzie: use the power of the internet to watch the official ones!
23:59:23 -!- tromp has joined.