←2023-10-19 2023-10-20 2023-10-21→ ↑2023 ↑all
00:23:30 -!- we11en has joined.
00:28:21 -!- we11en has quit (Ping timeout: 255 seconds).
00:29:13 -!- we11en has joined.
00:58:23 -!- Lord_of_Life_ has joined.
00:59:02 -!- Lord_of_Life has quit (Ping timeout: 255 seconds).
00:59:46 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
04:48:45 -!- we11en has quit (Quit: Lost terminal).
05:55:48 -!- we11en has joined.
06:15:29 -!- we11en has quit (Quit: Lost terminal).
06:18:12 -!- wpa has joined.
06:26:31 -!- chiselfuse has quit (Read error: Connection reset by peer).
06:26:50 -!- chiselfuse has joined.
06:54:41 <esolangs> [[Language list]] M https://esolangs.org/w/index.php?diff=118122&oldid=118007 * Peter01 * (-20) /* H */ Please, stop.
07:37:32 -!- craigo has joined.
08:20:41 -!- arseniiv has joined.
08:22:50 -!- __monty__ has joined.
08:44:33 -!- Koen has joined.
09:41:13 <esolangs> [[User:Cinnamony]] M https://esolangs.org/w/index.php?diff=118123&oldid=118110 * None1 * (-11) /* Picture */
09:55:09 -!- Sgeo has quit (Read error: Connection reset by peer).
10:04:09 <int-e> b_jonas: you might enjoy this: https://int-e.eu/~bf3/tmp/shapez-cursed-white.png
10:17:14 <esolangs> [[CFR]] M https://esolangs.org/w/index.php?diff=118124&oldid=118116 * None1 * (+23)
10:28:26 <esolangs> [[3 Bits, 3 Bytes]] N https://esolangs.org/w/index.php?oldid=118125 * None1 * (+1082) Create language
10:28:49 <esolangs> [[3 Bits, 3 Bytes]] M https://esolangs.org/w/index.php?diff=118126&oldid=118125 * None1 * (+1)
10:35:04 <esolangs> [[3 Bits, 3 Bytes]] https://esolangs.org/w/index.php?diff=118127&oldid=118126 * None1 * (+221)
10:35:37 <esolangs> [[3 Bits, 3 Bytes]] M https://esolangs.org/w/index.php?diff=118128&oldid=118127 * None1 * (+0)
10:36:19 <esolangs> [[3 Bits, 3 Bytes]] https://esolangs.org/w/index.php?diff=118129&oldid=118128 * None1 * (+51)
10:37:13 <esolangs> [[Joke language list]] https://esolangs.org/w/index.php?diff=118130&oldid=118095 * None1 * (+61) /* General languages */
10:37:24 <esolangs> [[User:None1]] https://esolangs.org/w/index.php?diff=118131&oldid=118101 * None1 * (+61) /* My Esolangs */
10:37:46 -!- wpa has quit (Quit: Connection closed for inactivity).
10:41:15 <esolangs> [[User:None1]] M https://esolangs.org/w/index.php?diff=118132&oldid=118131 * None1 * (+0) /* My Esolangs */
10:42:07 <esolangs> [[3 Bits, 3 Bytes]] https://esolangs.org/w/index.php?diff=118133&oldid=118129 * None1 * (+34)
11:08:17 -!- wib_jonas has joined.
11:08:42 <wib_jonas> int-e: does that break by getting desynchronized when you save and load?
11:09:14 <wib_jonas> it really is cursed, usually you want white paint in a large amount so it's not worth multiplexing a single mixer for it, and this doesn't even save space
11:10:00 <int-e> wib_jonas: It survived my reload test. But it'll still desync whenever you hold it wrong. And yes, the throughput is half of what you'd get with two mixers so it's really just a curiosity.
11:10:32 <int-e> My first uploaded version couldn't even be reset properly, but I fixed that :-)
11:13:35 <wib_jonas> if it's not practical, can you turn it into a puzzle instead?
11:33:17 <int-e> No, because puzzles don't have mergers or balancers. (They *do* have double painters though which sort-of can merge shapes, and people abuse that quite a bit.)
11:37:29 <int-e> I believe that design choice is actually a good one... there are just too many crazy things you can do with balancers and they tend to be extremely tricky to balance.
11:44:18 <wib_jonas> makes sense
13:26:13 -!- cpressey has joined.
14:04:48 -!- ais523 has joined.
14:12:57 <cpressey> hi ais523
14:35:56 -!- wib_jonas has quit (Quit: Client closed).
14:37:45 <ais523> hi cpressey
14:38:04 <ais523> do you logread? I replied to your questions from a couple of days ago in the logs, but you weren't online, and I don't know whether you read them
14:38:22 <ais523> https://logs.esolangs.org/libera-esolangs/2023-10-18.html#lLe
14:40:40 <cpressey> Oh sorry, yes, I saw your response, thank you.  Then I kind of forgot about it...
14:41:18 <ais523> fwiw I have been wondering about fun things like "an entirely branchless lexer+parser combination" (where the only branch is to handle EOF)
14:41:31 <river> :O
14:41:33 <ais523> and the same code does both the lexing and the parsing
14:41:55 * cpressey posts a squinting-Fry reaction gif
14:42:01 <ais523> this is of course possible in theory because branchless programming is TC, but I'm wondering whether it might be possible to get it more efficient than a traditional parser
14:42:45 <ais523> something like yacc has to have branches everywhere on whether it's read a lookahead token or not – ironically, just reading it unconditionally would probably be more efficient, because you wouldn't have to check whether you'd done it or not
14:43:10 <cpressey> I don't know much about branchless but I can imagine a sort of thing where you turn all the parsing operations into matrix operations or something
14:43:56 <ais523> but, it would change the semantics because you could no longer get the parser to change the way the next token was lexed (only the one after)
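[Editor's note: a minimal sketch of the "branchless" tokenizing idea discussed above — not ais523's actual design. All tables and names here are made up for a toy language of numbers separated by '+'; the per-character work is pure table indexing, and the only control-flow branch is the loop's end-of-input test.]

```python
# Toy branchless-style lexer: a table-driven DFA over digits and '+'.
# States: 0 = start, 1 = in-number. Character classes: 0 = digit, 1 = '+'.
CLASS = {c: 0 for c in "0123456789"}
CLASS["+"] = 1

# Hypothetical tables: NEXT[state][cls] -> next state,
# EMIT[state][cls] -> 1 if a number token ends just before this character.
NEXT = [[1, 0], [1, 0]]
EMIT = [[0, 1], [0, 1]]

def lex(s):
    tokens, start, state = [], 0, 0
    for i, ch in enumerate(s + "+"):    # '+' sentinel flushes the last number
        cls = CLASS[ch]
        end = EMIT[state][cls]
        tokens += [s[start:i]] * end    # appends only when end == 1
        start = start * (1 - end) + (i + 1) * end
        state = NEXT[state][cls]
    return [t for t in tokens if t]     # drop empties from leading/double '+'

print(lex("12+345+6"))  # ['12', '345', '6']
```

Only the number tokens are emitted here; the '+' separators are implied by position. A real branchless lexer+parser would of course need a much richer action table, but the shape — unconditional lookup, arithmetic select instead of if/else — is the same.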
14:43:56 -!- ais523 has quit (Remote host closed the connection).
14:44:10 -!- ais523 has joined.
14:45:23 -!- ais523 has quit (Client Quit).
14:45:38 -!- ais523 has joined.
14:46:05 <ais523> one thing that I have realised is that there are only finitely many possible states for the lookahead tokens to be in (for most standard parser types – LL(*) is an exception)
14:46:29 <ais523> so you could in theory combine them with the parser states
14:47:14 <ais523> I am not sure whether this would cause a combinatorial explosion – it's quite possible that it doesn't, because you wouldn't have to be able to store lookahead tokens that would inevitably cause an error anyway, you can just produce the error on the spot
14:49:45 <cpressey> If you're talking LL(1) you can compute the FIRST set of each production, which is usually small.  Combining this with the set of all the productions wouldn't usually lead to a huge product set, I think.
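[Editor's note: a toy FIRST-set computation for an LL(1)-style grammar, to illustrate the point that per-nonterminal lookahead sets are usually small. The grammar below is invented purely for illustration.]

```python
# Fixed-point computation of FIRST sets for a toy grammar:
#   E -> T E'     E' -> '+' T E' | epsilon     T -> 'n'
GRAMMAR = {
    "E":  [["T", "E'"]],
    "E'": [["+", "T", "E'"], []],       # [] is an epsilon production
    "T":  [["n"]],
}

def first_sets(grammar):
    first = {nt: set() for nt in grammar}
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for nt, prods in grammar.items():
            for prod in prods:
                nullable = True
                for sym in prod:
                    add = first[sym] - {""} if sym in grammar else {sym}
                    if not add <= first[nt]:
                        first[nt] |= add
                        changed = True
                    if sym in grammar and "" in first[sym]:
                        continue        # symbol can vanish; look further
                    nullable = False
                    break
                if nullable and "" not in first[nt]:
                    first[nt].add("")   # "" marks that nt derives epsilon
                    changed = True
    return first

print(first_sets(GRAMMAR))
```

Even here every FIRST set has one or two members, which is the "usually small" observation: combining these sets with the parser states multiplies by a small constant, not by the whole token alphabet.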
14:51:13 <ais523> right, and LR(1) is similar – canonical LR(1) has a lookahead set for each production, which again is usually small, and holds all the symbols that can possibly correctly appear in lookahead
14:51:51 <ais523> which implies that the various compressed LR(1) formats, like LALR(1), could also handle it
14:55:49 <cpressey> So one thing I've heard is that "recursive descent is better than LR in modern world because the LR lookup tables for realistically sized grammars don't fit in cache lines".  This was a long time ago though, maybe 20 years ago.  I don't know if it was then true or is still true.
14:56:22 <cpressey> I also heard that the Lua team replaced their yacc parser with a hand-crafted RDP and the speedup was significant, something like 2x
14:57:11 <ais523> I think of LR grammars as having a large LR table, but it's compressed for storage and used in the compressed form
14:57:20 <ais523> and think that parsers can be optimised by finding better ways to compress it
14:57:31 <ais523> looking at LR tables generated by yacc and friends, there is a lot of repetition
14:58:19 <ais523> I also think recursive descent parsers use more cache than the typical LR table does, but it's L1c rather than L1d, which may matter – there is normally less pressure on the code cache while parsing than on the data cache
14:59:00 <cpressey> I generally don't work on this level of optimization, anyway; for me "efficient" means "in P instead of in EXPTIME" :)
15:05:55 <cpressey> I'm working on a grammar formalism, but it's not for building parsers; it's a lot like an attribute grammar or DCG, the main improvement over these being that it can generate strings as efficiently as it can parse them.  (again, "efficiently" meaning "avoiding exponential blowup")
15:06:53 -!- Thelie has joined.
15:07:57 <cpressey> You can run a DCG "forwards or backwards" in a relational programming language like Prolog or miniKanren to do this, but if it's efficient in one direction (parsing) it will be inefficient in the other direction (generation).
15:08:18 <cpressey> (I don't have a proof of this but this was my experience from playing with it extensively)
15:11:29 <ais523> cpressey: oh, that's interesting
15:12:07 <ais523> one of the things I was looking at was the possibility of bootstrapping a parser, and an idea I had to do that was to write the parser in Prolog
15:12:15 <ais523> and have it generate its own source code by unparsing itself
15:12:25 <ais523> I didn't actually get it to work beyond a small proof of concept
15:12:40 -!- b_jonas has quit (Ping timeout: 252 seconds).
15:12:45 <ais523> so I'm not sure what the efficiency was like
15:13:34 -!- b_jonas has joined.
15:15:58 <ais523> but yes, I don't normally work on this level of optimization either, but thought it would be a fun (and potentially practically useful) followup to the fizzbuzz
15:32:26 <esolangs> [[Jack Eisenmann]] M https://esolangs.org/w/index.php?diff=118134&oldid=50339 * PythonshellDebugwindow * (+30) Stub, category
15:48:55 -!- Thelie has quit (Ping timeout: 264 seconds).
16:00:34 <cpressey> I'm having a heck of a time getting backtracking right in the generation case.  I probably need to step back.
16:02:07 -!- Thelie has joined.
16:03:40 <cpressey> If I can do it, though, it ought to be neat.  To be able to write a grammar that can solve knapsack problems.  That sort of thing.
16:04:34 <cpressey> I think I'm missing the fact that a loop can have three outcomes: it can terminate successfully, it can fail (and cause backtracking in the enclosing context), or it can repeat (and then you ask this question again and get these three outcomes again)
16:04:53 <ais523> maybe it'd be easier to generate in parallel rather than using backtracking?
16:05:06 <ais523> (this is comparable to using a call queue rather than a call stack)
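[Editor's note: a sketch of the "call queue rather than call stack" suggestion, assuming a plain CFG and breadth-first expansion — the grammar and names are invented for illustration. BFS enumeration can't get stuck down one infinitely deep branch the way naive depth-first backtracking can, at the cost of keeping the whole frontier in memory.]

```python
from collections import deque

# Enumerate strings of a toy CFG breadth-first.
# Grammar: S -> 'a' S 'b' | '' (the language a^n b^n).
GRAMMAR = {"S": [["a", "S", "b"], []]}

def generate(grammar, start="S", limit=5):
    out = []
    queue = deque([[start]])            # each entry is a sentential form
    while queue and len(out) < limit:
        form = queue.popleft()          # queue, not stack: breadth-first
        idx = next((i for i, s in enumerate(form) if s in grammar), None)
        if idx is None:
            out.append("".join(form))   # all terminals: a finished string
            continue
        for prod in grammar[form[idx]]: # expand the leftmost nonterminal
            queue.append(form[:idx] + prod + form[idx + 1:])
    return out

print(generate(GRAMMAR))  # ['', 'ab', 'aabb', 'aaabbb', 'aaaabbbb']
```

Swapping `popleft()` for `pop()` turns this into the depth-first version, which on this grammar would keep expanding `a S b` forever before ever emitting a string — the trade-off cpressey raises next.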
16:06:20 <cpressey> Well, in some ways that would be nicer, yes -- like miniKanren, you won't get stuck in an infinite DFS.
16:06:39 <cpressey> But the flipside is that you can start being a memory hog!
16:09:17 <ais523> yes, it's probably bad from a memory point of view unless you can compress the storage somehow
16:10:43 <cpressey> At some point earlier this year I ended up reading GOFAI textbooks trying to understand if "truth maintenance" could be used to narrow down the search space for that sort of thing.
16:12:11 <cpressey> Which was somewhat interesting, actually, because in the modern world those algorithms no longer look like "artificial intelligence", they just look like search space optimization
16:12:23 -!- wib_jonas has joined.
16:12:33 <wib_jonas> cpressey: "because the LR lookup tables for realistically sized grammars don't fit in cache lines" => yes, and that's why I don't think a branchless parser is such a good idea
16:14:12 <ais523> well, one cache line is 64 bytes, and the L1 data cache as a whole is normally 32 KiB
16:14:24 <ais523> you can't fit a parser into the former, but the latter seems plausible
16:15:27 <ais523> anyway, I think the real problem is handling output from the parser efficiently
16:15:45 <ais523> normally parsers are used to build a tree-structured AST, but that's inherently slow
16:15:55 <cpressey> wib_jonas: what if you could turn it into a numerical matrix problem (insert lots of handwaving here) and run it on the GPU though?
16:16:41 <cpressey> actually, I think someone did something like that for parsing JSON
16:20:43 <cpressey> I was apparently thinking of "simdjson".  But it looks like people have tried using GPUs to parse CSV and JSON.  These are very specialized approaches, though; a general approach feels very much like it still requires a lot of handwaving
16:22:53 <ais523> GPUs don't obviously map well to most parsing algorithms
16:23:08 <ais523> they might be good at the algorithm used for DCGs, that one feels somewhat parallel in spirit
16:24:15 <ais523> it feels weird trying to use a GPU to speed up something that's linear-time anyway, though
16:29:41 <cpressey> Well, the algorithm for DCGs is no different than the rest of Prolog; it's basically syntax sugar for inference rules.  I tried searching for GPU-accelerated inference engines and all the results are AI stuff, because that's what "inference engine" means in 2023.
16:34:10 <ais523> I said the wrong thing, I didn't mean DCGs but PEGs
16:40:55 <cpressey> Ah.  Hm, well PEGs backtrack, but they use ordered choice, which doesn't feel very parallel-y to me; I assumed that it was generally implemented linearly with some kind of DFS.  But I haven't looked at their implementation.  ("packrat parsing" is it?)
16:41:37 <ais523> it's basically dynamic programming
16:41:40 <cpressey> My brain wants to say "how is this not dynamic programming / memoization all over again"
16:41:54 <cpressey> Exactly
16:42:14 <ais523> but the reason it's linear time is that there are only finitely many things that could be memoized per token of input
16:42:26 <ais523> so, instead of memoizing on demand, you could possibly calculate them all in parallel
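[Editor's note: a minimal packrat-style sketch of the memoization being described, assuming a toy PEG invented for illustration (`Sum <- Num ('+' Num)*`, `Num <- [0-9]+`). Each rule is memoized per input position, so every (rule, position) query is computed at most once — the "finitely many things to memoize per token" property that makes the whole parse linear time.]

```python
from functools import lru_cache

TEXT = "1+23+456"   # fixed input; real packrat keys the memo on (rule, pos)

@lru_cache(maxsize=None)            # memo table for this rule: pos -> end pos
def num(pos):
    end = pos
    while end < len(TEXT) and TEXT[end].isdigit():
        end += 1
    return end if end > pos else None   # None means the rule fails here

@lru_cache(maxsize=None)
def sum_rule(pos):
    end = num(pos)
    if end is None:
        return None
    while end < len(TEXT) and TEXT[end] == "+":
        nxt = num(end + 1)
        if nxt is None:
            break                   # PEG-style: keep the longest prefix matched
        end = nxt
    return end

print(sum_rule(0) == len(TEXT))  # True: the whole input matches Sum
```

Because the memo table has at most (number of rules) × (number of positions) entries, one could in principle fill it bottom-up for all positions at once rather than on demand — which is the parallel formulation being suggested.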
16:43:55 <ais523> GPUs don't require the things they're calculating in parallel to be entirely independent, just to have similar code – the various threads are allowed to communicate with each other, and in some special cases can do so very efficiently
16:46:13 <cpressey> (I think I see my confusion regarding backtracking generation now - but now I don't understand how I thought the backtracking parsing part was working - I might've chosen a bad example to use as a test)
16:57:19 -!- cpressey has quit (Quit: Client closed).
17:09:51 <wib_jonas> hmm, so you're saying that the GPU parsing would work like this. for each n from 0 to the logarithm of the document length, you split the document into 2**n-long infixes, and parse each of those infixes starting from each parser state. if 0==n then you just look up the transition rules, whereas for 0<n you check the parse of both halves and index the parse of the second half using the final state from the parses of the first half. does that make sense?
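[Editor's note: a sequential Python sketch of the chunked scheme just described, for the finite-state case only (as ais523 points out below, stacks make it harder for a PDA). Run the automaton over each chunk from every possible start state, giving one state-to-state map per chunk, then compose the maps; the per-chunk maps and each level of the composition are independent work, which is what a GPU would parallelize. The automaton here is an invented toy: parity of the number of 'a's.]

```python
# Toy FSM: states 0/1 = even/odd number of 'a's seen so far.
STATES = [0, 1]

def step(state, ch):
    return state ^ (ch == "a")          # flip parity on 'a'

def chunk_map(chunk):
    # For each possible entry state, where does this chunk take us?
    return {s: _run(s, chunk) for s in STATES}

def _run(state, chunk):
    for ch in chunk:
        state = step(state, ch)
    return state

def compose(m1, m2):                    # apply m1, then m2
    return {s: m2[m1[s]] for s in STATES}

text = "abcabca"
chunks = [text[i:i + 2] for i in range(0, len(text), 2)]
maps = [chunk_map(c) for c in chunks]   # independent per chunk (parallel)
combined = maps[0]
for m in maps[1:]:                      # on a GPU: a log-depth tree reduction
    combined = compose(combined, m)
print(combined[0])  # 1 (odd number of 'a's), same as a sequential scan
```

The catch wib_jonas identifies immediately below is real: for LL/LR the "state" includes a stack, so the per-chunk maps are no longer small finite tables.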
17:10:25 <ais523> that isn't what I was planning, but I think it might work?
17:10:45 <wib_jonas> I'm honestly not sure, I don't understand GPUs
17:10:55 <ais523> the problem is that doing it that way seems to imply a finite state machine rather than a PDA
17:11:42 <wib_jonas> I was thinking of a LL parser. you will have to read the lookahead tokens right after the infixes, but that shouldn't be a problem.
17:11:59 -!- Koen has quit (Remote host closed the connection).
17:12:06 <wib_jonas> oh, but you also have to save the stacks in each of the parses, and those will be large for LL or LR
17:12:11 <wib_jonas> so yes, this won't work well
17:13:08 <ais523> the PEG approach is basically to record, for each position in the input and each nonterminal, "could that nonterminal start here, and if so, where does it finish?"
17:13:35 <ais523> (PEGs are limited so that there can't be two different ways to parse the same nonterminal starting from the same position)
17:13:53 <ais523> I am not sure how well that would map to GPUs, it depends a lot on the order in which you scan the various nonterminals
17:14:15 -!- wib_jonas has quit (Quit: Client closed).
17:30:14 -!- Sgeo has joined.
18:40:24 -!- cpressey has joined.
18:43:03 <cpressey> ais523: fwiw, considering the idea of parallelizing PEG parsing reminds me of "chart parsing" algorithms (of which Earley and CYK are examples) where the dynamic programming is also there to handle the nondeterminism, and can find all parses.
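[Editor's note: a toy CYK recognizer as a concrete instance of the chart parsing mentioned above, with an invented CNF grammar for a^n b^n (S -> A T | A B, T -> S B, A -> 'a', B -> 'b'). The relevance to the parallelism discussion: all cells on the same diagonal of the chart depend only on earlier diagonals, so each diagonal can be filled in parallel.]

```python
# CYK over a chart indexed as chart[start][span] = set of nonterminals.
GRAMMAR = [("S", ("A", "T")), ("S", ("A", "B")), ("T", ("S", "B"))]
LEX = {"a": {"A"}, "b": {"B"}}

def cyk(s):
    n = len(s)
    chart = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(s):
        chart[i][1] = set(LEX.get(ch, ()))  # span-1 cells from the lexicon
    for span in range(2, n + 1):            # one diagonal at a time
        for i in range(n - span + 1):       # cells here are independent
            for split in range(1, span):
                left = chart[i][split]
                right = chart[i + split][span - split]
                for nt, (l, r) in GRAMMAR:
                    if l in left and r in right:
                        chart[i][span].add(nt)
    return "S" in chart[0][n]

print(cyk("aabb"), cyk("abab"))  # True False
```

CYK is cubic time rather than linear, which is part of why it feels like overkill for PEGs; its appeal here is that the chart records every parse of every substring, not just one.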
18:44:04 <ais523> maybe I'll look into parallel parsing after my current attempt
18:44:42 <ais523> one nice thing about a branchless parser is that you could write it with SIMD instructions and do several unrelated parses simultaneously on the same core – I'm not sure how much throughput that would gain, memory bandwidth might be a bottleneck
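[Editor's note: a pure-Python stand-in for the SIMD idea just mentioned, using an invented toy DFA (accept strings over 'a'/'b' containing "ab"). Several unrelated matches run in lockstep: every step applies the same table lookup to a whole vector of lane states, which on real hardware would be a SIMD gather. Shorter inputs are padded with a '_' symbol whose transition is the identity, so padding can't change a lane's answer.]

```python
# NEXT[ch][state] -> next state. States: 0 = nothing seen,
# 1 = just saw 'a', 2 = accept (sticky). '_' is the identity pad symbol.
NEXT = {"a": [1, 1, 2], "b": [0, 2, 2], "_": [0, 1, 2]}

def run_batch(inputs):
    width = max(map(len, inputs))
    padded = [s.ljust(width, "_") for s in inputs]
    states = [0] * len(inputs)          # one lane per input string
    for i in range(width):              # same step applied to every lane
        states = [NEXT[s[i]][st] for s, st in zip(padded, states)]
    return [st == 2 for st in states]

print(run_batch(["ab", "ba", "aab", "bbb"]))  # [True, False, True, False]
```

The loop body has no per-lane branching, so the throughput question reduces to how fast the gathers go and, as noted above, whether memory bandwidth becomes the bottleneck first.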
18:44:51 <cpressey> Chart parsing is vaguely cool imo, and if it can find all parses, it seems almost like a waste to apply it to PEG :)
18:45:26 <ais523> I don't like PEG anyway :-)
18:45:53 <ais523> but then, I don't like ambiguous grammars either (at least for parsing purposes; they're more interesting if viewed as an esolang)
18:49:08 <cpressey> Aye, you might remember a while ago I said I preferred parsing tools that were "more solidly theory-based" than yacc; and PEG does not count as "solidly theory-based" for me.
18:50:25 <ais523> I am positive that the correct front-end approach is something that desugars to a true context-free grammar – no yacc-like weird precedence overrides, no PEG-like arbitrary disallowing of certain parses
18:50:46 <ais523> (OK, it technically isn't arbitrary in PEG because their alternation-like operator is well defined, its semantics just aren't what you'd typically want)
18:51:04 <ais523> also, that the correct front-end approach adds sugar for things like precedence, which desugars into a true CFG very simply
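[Editor's note: a sketch of what that precedence sugar could desugar into, under the standard one-nonterminal-per-precedence-level construction. The function and names are invented for illustration; it handles only left-associative levels, emitting left-recursive productions (left recursion encodes left associativity in a plain CFG).]

```python
# Given ordered operator levels (loosest first), emit a stratified CFG:
# one nonterminal E0, E1, ... per level, plus an atom level at the bottom.
def desugar(levels, atom="NUM"):
    nts = [f"E{i}" for i in range(len(levels) + 1)]
    rules = []
    for i, ops in enumerate(levels):
        for op in ops:                  # left recursion = left associativity
            rules.append((nts[i], [nts[i], op, nts[i + 1]]))
        rules.append((nts[i], [nts[i + 1]]))
    rules.append((nts[-1], [atom]))
    rules.append((nts[-1], ["(", nts[0], ")"]))
    return rules

for lhs, rhs in desugar([["+", "-"], ["*", "/"]]):
    print(lhs, "->", " ".join(rhs))
```

For the usual arithmetic levels this prints the familiar expr/term/factor grammar (E0 -> E0 + E1 | E0 - E1 | E1, and so on down to E2 -> NUM | ( E0 )) — a true CFG with no precedence annotations left, which is the point being made above.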
19:12:24 -!- cpressey has quit (Quit: Client closed).
19:48:12 -!- __monty__ has quit (Ping timeout: 240 seconds).
19:53:48 -!- arseniiv has quit (Quit: gone too far).
20:03:21 <esolangs> [[User talk:None1]] M https://esolangs.org/w/index.php?diff=118135&oldid=116873 * TheBigH * (+361)
20:04:46 -!- __monty__ has joined.
20:19:58 -!- Koen has joined.
20:42:42 -!- __monty__ has quit (Quit: leaving).
21:29:26 -!- Thelie has quit (Quit: Leaving.).
21:29:31 -!- Thelie1 has joined.
21:29:41 -!- craigo has quit (Ping timeout: 260 seconds).
22:59:35 -!- Koen has quit (Remote host closed the connection).
23:02:32 -!- Koen has joined.
23:24:08 -!- Koen has quit (Quit: Leaving...).
23:38:29 -!- Thelie1 has quit (Ping timeout: 255 seconds).