00:39:26 <esolangs> [[Topple]] https://esolangs.org/w/index.php?diff=154533&oldid=154510 * H33T33 * (-42)
00:41:56 <esolangs> [[User:H33T33]] M https://esolangs.org/w/index.php?diff=154534&oldid=152196 * H33T33 * (+12) /* Experience */
00:52:09 <esolangs> [[Special:Log/newusers]] create * Hazel * New user account
00:54:15 <esolangs> [[Esolang:Introduce yourself]] M https://esolangs.org/w/index.php?diff=154535&oldid=154478 * Hazel * (+111) Hello, world -Hazel
00:55:17 <esolangs> [[User:Hazel]] N https://esolangs.org/w/index.php?oldid=154536 * Hazel * (+48) Created page with "== hazel == transfem lesbian, she/her. the end."
00:59:22 <esolangs> [[User:Aadenboy/Zerons]] M https://esolangs.org/w/index.php?diff=154537&oldid=154531 * Aadenboy * (+0) /* Further */ mistake
01:24:18 <esolangs> [[Rivulet]] https://esolangs.org/w/index.php?diff=154538&oldid=154500 * Rottytooth * (+2625) Added more syntax
01:29:13 <esolangs> [[Rivulet]] https://esolangs.org/w/index.php?diff=154539&oldid=154538 * Rottytooth * (+361) Adding images
01:35:18 <esolangs> [[Special:Log/upload]] upload * Rottytooth * uploaded "[[File:Rivulet Fibonacci1.png]]": A Fibonacci program in the Rivulet language
01:36:10 <esolangs> [[Rivulet]] https://esolangs.org/w/index.php?diff=154541&oldid=154539 * Rottytooth * (+0)
01:36:13 <esolangs> [[Goog]] N https://esolangs.org/w/index.php?oldid=154542 * Hazel * (+2271) Add Esolang "Goog"
01:36:51 <esolangs> [[Special:Log/upload]] upload * Rottytooth * uploaded "[[File:Rivulet Fibonacci2.png]]"
01:37:50 <esolangs> [[Special:Log/upload]] upload * Rottytooth * uploaded "[[File:Rivulet Fibonacci4.png]]": Another Fibonacci program in Rivulet
01:42:18 <esolangs> [[UT19]] https://esolangs.org/w/index.php?diff=154545&oldid=122761 * Stkptr * (+327) /* The production rules of UT19 */ Add more typical alphabetical version for clarity
01:44:34 <esolangs> [[Rivulet]] https://esolangs.org/w/index.php?diff=154546&oldid=154541 * Rottytooth * (+151) Added images
01:45:08 <esolangs> [[Rivulet]] https://esolangs.org/w/index.php?diff=154547&oldid=154546 * Rottytooth * (+0) Resized images
01:49:44 <esolangs> [[Language list]] https://esolangs.org/w/index.php?diff=154548&oldid=154523 * Hazel * (+11) /* G */ - Add Goog
03:52:41 <fizzie> The IPv6 saga continues: DigitalOcean has now (politely) told me to go complain to my ISP instead. Well, there's still a "different team" investigating it from their end, but still.
03:53:09 <fizzie> So I filed a support ticket with my ISP, and hit the size limit of their web form, and after clicking submit the report just... disappeared.
03:53:28 <fizzie> So I've sent them an email instead, because at least that doesn't have silly-short size limits.
03:55:07 <fizzie> But I'm pessimistically expecting I have to go through the standard support script of "have you tried restarting your router?" and (once they learn I'm not actually using the one they supplied) the request to switch back to their dinky box, no matter how clear it seems to be that the problem's _between_ the ISP and DigitalOcean networks, not at either end.
03:56:36 <fizzie> And I guess to be fair, it's not entirely impossible that, I don't know, their "Hyperhub" does some special magic in the DHCPv6 prefix delegation request that enables this one particular route to also work.
03:57:16 <int-e> heh that could be the worst outcome
04:00:14 <int-e> . o O ( haunted IPv6 routing )
04:52:56 <esolangs> [[Goog]] https://esolangs.org/w/index.php?diff=154549&oldid=154542 * Stkptr * (-230) Fix categories, no looping so it's total, also no conditionals
05:08:43 -!- ais523 has quit (Quit: quit).
05:27:24 <zzo38> I wanted to make up a pokemon battle with each side only having one pokemon remaining, of which the first one has a move with recoil damage (or Life Orb) that will knock itself out, and knock out the second pokemon iff it scores a critical hit, but the second pokemon is faster, and confused and will knock itself out if it attacks itself due to confusion but has a move that will defeat the first one in one hit.
05:28:57 <zzo38> Furthermore, the second one must be allowed to learn Roar or Whirlwind, and the first one does not have any move without recoil damage that it is able to select.
05:30:07 <zzo38> Do you know?
06:22:09 <zzo38> (Actually, it is probably not necessary for the confusion damage to knock out yourself in one hit, but it must knock you out when added to the minimum damage caused by the opponent's attack.)
06:36:16 -!- Sgeo has quit (Read error: Connection reset by peer).
06:43:12 <zzo38> On TI-92, newList(0) is an error even though logically it would make sense to be valid (to make an empty list), I think.
07:15:01 -!- Lord_of_Life_ has joined.
07:16:08 -!- Lord_of_Life has quit (Ping timeout: 268 seconds).
07:16:23 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
07:42:17 <esolangs> [[Table]] M https://esolangs.org/w/index.php?diff=154550&oldid=148527 * Rdococ * (-47) /* Semantics */
08:31:40 <esolangs> [[Special:Log/upload]] upload * BerdiesLuvr * uploaded "[[File:L'toile Noire.png]]"
08:55:08 -!- b_jonas has quit (Quit: leaving).
09:42:23 -!- sirikon has joined.
09:43:12 -!- sirikon has quit (Client Quit).
09:43:33 -!- sirikon has joined.
09:44:50 -!- sirikon has quit (Client Quit).
10:36:29 -!- rodgort has quit (Ping timeout: 244 seconds).
11:15:16 -!- rodgort has joined.
11:32:58 -!- wib_jonas has joined.
11:34:36 <wib_jonas> re https://logs.esolangs.org/libera-esolangs/2025-03-24.html#lub the "Edg/" thing is correct, https://dpaste.com/C2XPZU8K2.txt has request headers I got from actual browsers on windows. These are headers for a request to an unknown host, so probably not quite the same as the typical headers that browsers send, where they already know they can send
11:34:37 <wib_jonas> HTTP2 or SSL or whatever.
11:35:06 <wib_jonas> the user agent string says: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/133.0.0.0 Safari/537.36 Edg/133.0.0.0
11:39:29 <wib_jonas> so yes, that has quite a few brand names in there. we should make one that also says Netscape Firebird MSIE Opera in the same line.
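The brand/version tokens wib_jonas is counting can be pulled out of that UA string mechanically; a small sketch (the pattern just matches Name/Version pairs, nothing browser-specific):

```python
import re

# The UA string pasted above: every token of the form Name/Version is a
# "brand" claim; parenthesised platform details have no slash and are skipped.
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
      "AppleWebKit/537.36 (KHTML, like Gecko) "
      "Chrome/133.0.0.0 Safari/537.36 Edg/133.0.0.0")

brands = re.findall(r"([A-Za-z]+)/[\d.]+", ua)
assert brands == ["Mozilla", "AppleWebKit", "Chrome", "Safari", "Edg"]
```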
11:41:02 <fizzie> Latest charts: https://zem.fi/tmp/errors3.png -- nice stair-step pattern there. So still happening, sporadically.
12:09:43 -!- craigo has joined.
13:33:26 <esolangs> [[UserEdited]] https://esolangs.org/w/index.php?diff=154552&oldid=154100 * Hotcrystal0 * (+508)
13:33:45 <esolangs> [[UserEdited]] https://esolangs.org/w/index.php?diff=154553&oldid=154552 * Hotcrystal0 * (+1)
13:34:13 <esolangs> [[UserEdited/Versions]] https://esolangs.org/w/index.php?diff=154554&oldid=154081 * Hotcrystal0 * (+35)
13:44:36 -!- chiselfuse has quit (Ping timeout: 264 seconds).
13:45:25 -!- chiselfuse has joined.
13:57:59 <esolangs> [[User:JIT]] https://esolangs.org/w/index.php?diff=154555&oldid=154423 * JIT * (+336)
14:01:32 <esolangs> [[Fontmess]] https://esolangs.org/w/index.php?diff=154556&oldid=154408 * JIT * (+33)
14:13:22 <esolangs> [[Rivulet]] M https://esolangs.org/w/index.php?diff=154557&oldid=154547 * Rottytooth * (+43) /* Strand Types */
14:15:19 <esolangs> [[Language list]] https://esolangs.org/w/index.php?diff=154558&oldid=154548 * Rottytooth * (+14) Added Rivulet
14:16:39 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=154559&oldid=152628 * Rottytooth * (+31) Lang list
14:26:15 <esolangs> [[User:PrySigneToFry/Sandbox/Draft of EtPL]] https://esolangs.org/w/index.php?diff=154560&oldid=154501 * PrySigneToFry * (+838)
14:32:01 <esolangs> [[Rivulet]] https://esolangs.org/w/index.php?diff=154561&oldid=154557 * Rottytooth * (+84) /* Example */
14:32:16 <esolangs> [[Rivulet]] https://esolangs.org/w/index.php?diff=154562&oldid=154561 * Rottytooth * (+1) /* Example */
14:37:30 -!- FreeFull has joined.
15:00:01 <esolangs> [[User:Aadenboy/Zerons]] https://esolangs.org/w/index.php?diff=154563&oldid=154537 * Aadenboy * (-809) /* \sqrt{q}? */ hang on that's wrong
15:14:34 -!- amby has joined.
15:54:12 -!- ais523 has joined.
16:29:22 -!- wib_jonas has quit (Quit: Client closed).
16:29:31 <esolangs> [[Special:Log/newusers]] create * Ilikeundertale * New user account
16:54:11 <esolangs> [[Waduzitdo]] M https://esolangs.org/w/index.php?diff=154564&oldid=130315 * Krolkrol * (+123)
16:58:30 <korvo> Curses. I've seen the end of programming. I'm adding "set up a co-op that can fund the next stage of software development" to my topics to chat with my lawyer about.
16:58:54 <korvo> I've been chewing on "Galois Theory of Algorithms" https://arxiv.org/abs/1011.0014 and thinking about what it means for writing a program.
16:59:38 <korvo> Let's say that I'm hacking in some cruddy imperative "close-to-the-metal" ALGOL descendant. There are few automorphisms which fix the semantic behavior of programs in that language, because almost every statement is fully dependent on almost every other prior statement. We call this "data-dependent control flow".
17:00:46 <korvo> Yanofsky proposes three reasonable operations, and notes that we can take a quotient of programs based on the requirement that these operations commute. The operations are "bracket" (product), primitive recursion, and sequential composition.
17:01:26 <korvo> So, for example, if we have a program like `x += 1; y += 2;` then we can note that the lack of data dependency between the two statements allows us to find an automorphism respecting composition sending it to `y += 2; x += 1;`
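The swap korvo describes can be checked mechanically in a toy setting. A minimal sketch, modelling statements as state transformers (all names here are illustrative):

```python
# Two statements with no data dependency: executing them in either order
# yields the same final state, so swapping them is a semantics-preserving
# automorphism with respect to sequential composition.

def run(statements, state):
    """Apply a list of state-transformer functions in order."""
    for stmt in statements:
        state = stmt(state)
    return state

inc_x = lambda s: {**s, "x": s["x"] + 1}   # x += 1
inc_y = lambda s: {**s, "y": s["y"] + 2}   # y += 2

start = {"x": 0, "y": 0}
assert run([inc_x, inc_y], start) == run([inc_y, inc_x], start)
```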
17:02:13 <korvo> I'll skip the category theory. It's shown that we can decide equivalence of programs given bracket and composition, but *not* with primitive recursion.
17:03:07 <korvo> It's also known that the "initial operations" generated by these three operations include building every natural number, and thus that the automorphisms ought to respect those operations; a program that outputs 5 should never be considered equivalent to a program which doesn't output 5.
17:04:21 <korvo> Finishing the background, Yanofsky gives the punchline that the automorphism group under all three ops is the trivial group; there are programs whose behavior isn't equivalent to others under these three reasonable ways of combining programs.
17:05:04 <korvo> This isn't in any fancy computational model. It's Kleene's primitive-recursive functionals. The same stuff used in HOL or Cammy or Peano arithmetic.
17:06:39 <korvo> Here's what I just realized. Let some monad carry our imperative language's effects. The monad freely gives a Kleisli category, *which has composition*. We usually compute over sets, over which all monads are strong, so we freely *have a bracket*.
17:08:39 <korvo> So, given some equivalence of imperative programs, Yanofsky says that we also have an equivalence of arrows in the Kleisli category. But they also say that if we add primitive recursion for those arrows, then we can't compute the equivalence any longer.
17:09:46 <korvo> But primitive recursion in a category is merely repeated composition. So this means that we can't even compare equivalence of programs under indefinitely-repeated composition, which makes sense if you know about e.g. mortal-matrix problems.
17:13:37 <korvo> It's not quite a slogan, but: Sometimes a program will not be computably equivalent to any other program, even any other program which computes the same function.
17:14:53 <korvo> This is kind of like making Rice's theorem worse, and also kind of like an irreducible-control-flow lemma.
17:18:30 <korvo> ...It's also obviously a little wrong, in the sense that given some total program P, some composition c() and some identity program I, c(P,I) ≈ P, and even for partial P, c(I,P) ≈ P. So we need some sort of size to give order and metric, and then restate "sometimes, the *smallest* program computing some function..."
17:21:23 <korvo> Anyway, my big insight is that we currently write *programs*, and we have trouble not doing that because sometimes we can't (efficiently??) express certain *functions* without getting into machine-specific details.
17:22:22 <korvo> Obviously we want to write *algorithms* since the 60s and *functions* since the 80s. Yes. But how? We know programs are surjective on algorithms are surjective on functions, but that doesn't somehow ease the task of actually picking out a program for a particular function.
17:24:06 <korvo> Functions and algorithms both give genuine categories; we *can* decompose them with the three ops. But we can't do that for programs. So instead our compilers all work in subsets of the languages that support "calling conventions" and "procedures" and other ways of reusing and compressing code.
17:25:47 <korvo> Since we can't stop paying those costs, we call them "interpretative overhead" and brag about how our "Jones-optimal partial evaluator" can remove them, asymptotically, probably. We complain about how "intrinsics" perform an object-register mapping and never consider whether ORMs are necessary.
17:30:37 <korvo> This is also the root of the split between MLs. I've long struggled to see why Haskell, OCaml, etc. have efficient runtimes and also aren't eta- or beta-equivalent, and meanwhile Idris, Agda, Coq, etc. are pure type systems that are not fast and must undergo extraction for speed. If we note that the latter have a vague concept of program equivalence, then it's somewhat obvious what's going on: the latter are program-oriented, not function-oriented
17:30:37 <korvo> !
17:31:48 <korvo> ...Yes, thanks IRC. Anyway, this suggests that e.g. Rust is in the former camp and therefore can't save software development because it lacks nice composition properties. It also suggests that e.g. Cammy is in the latter camp, and indeed it's only fast because of extraction with a JIT. I'm open to frame challenges but this is explaining more than it's obscuring.
17:39:41 <ais523> korvo: reading your messages now
17:41:06 -!- ajal has joined.
17:41:06 -!- amby has quit (Read error: Connection reset by peer).
17:41:46 <ais523> korvo: I think I have my own thoughts about this sort of thing, but they don't contradict yours
17:43:19 <ais523> recently I've been viewing programming through a lens of a) trying to make subprograms into pure functions as much as possible, with any effects encoded in the input and output, and b) where possible, limiting the syntax to things that have a nice canonical form that optimizers can operate based on
17:44:10 <ais523> one of the biggest mistakes programming languages make is to accidentally make a detail that should be irrelevant observable
17:44:39 <ais523> and this is a type of mistake that normally can't be backwards-compatibly corrected
17:45:06 <ais523> and I think the reason it happens is that the languages start by using algorithms to specify functions, and that makes it easy to accidentally capture an implementation detail
17:47:04 <ais523> I've also been considering writing a blog post about why Rust reborrowing (the thing that is special about &mut) seems to cause most of Rust's major problems, even though it's hard to do without – and I think that is consistent with your viewpoint too, as reborrowing is the Rust construct that behaves least like a pure function
17:49:18 <korvo> ais523: Yeah. I think that we often couch all of this in terms of purity, which happens to generalize Yanofsky because functions are definitionally pure. Certainly, implementing pure type systems is a special case of all of this.
17:49:57 <ais523> I think purity is the most marketing-friendly way to talk about the concept, at least – even people who aren't deeply into type theory can have an idea of what it means
17:50:29 <korvo> To stereotype my conversations with implementors of languages like Idris or Coq, they have a distinction between "computation" in terms of simplifying/rewriting terms, and "extraction" in terms of actual compilation.
17:51:18 <korvo> This also shows up in partial evaluation; in e.g. lambda calculus, there's a real distinction between beta-reduction of symbolic terms and the effective beta-reduction performed by call/return semantics.
17:51:41 <korvo> Er, like CBV semantics obviously, but also calling by partially-evaluated reference.
17:52:33 <ais523> I am increasingly thinking that references should be an implementation detail rather than something explicitly managed by the programmer
17:53:06 <ais523> an obvious example is the distinction between &T and &&T in Rust, which is necessary for type-system reasons but causes runtime overhead for almost no good reason
17:53:39 <korvo> ais523: Your (b) is definitely both desirable and something for which we have a type-theoretic ceiling. It's known that Cartesian-closed categories have a canonical form for arrows on their own, but if we add sums (bicartesian) or natural-numbers objects, then equivalence is undecidable.
17:53:54 <ais523> (the only thing that prevents you optimising &&T into a copy of the &T is that it might get converted to a raw pointer, which most programs don't care about doing)
17:54:40 <ais523> I wouldn't expect practically useful programming languages to have decidable equivalence, because that places huge constraints on the computational class that likely rule out a wide range of useful programs
17:54:43 <korvo> The first part of that sounds a lot like "no Turing-complete language has genuine sums and products", but it comes from a totally different angle: there's no finite axiomatization of rewrites for bicartesian-closed categories.
17:55:09 <ais523> but maybe large portions of them can
17:56:20 <ais523> in a language I am working on, I realised that programs are conceptually much easier to express if the virtual-machine evaluation order used in the specification is very different from the evaluation order of the compiled assembly language
17:56:40 <ais523> although in order to pull that off, you need a lot of statements to be able to commute
17:58:17 <ais523> I suspect the construct that most commonly causes program equivalence to become undecidable is the unbounded loop (whether done via recursion or imperatively)
17:58:25 <ais523> or, well, Blindfolded Arithmetic is enough to demonstrate that
17:58:30 <ais523> don't even need any control flow, just the loop
17:58:37 <korvo> Yeah. And we can see the gaps here. For example, when we compile a language with pairs ("bracket") into stack-machine code, we usually get to choose whether to evaluate the left-hand or right-hand component first. That choice is a compiler choice and often commutative.
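The compiler choice korvo mentions can be made concrete: for pure components the two evaluation orders are indistinguishable, and a single effect makes them observable. A toy sketch (names invented for illustration):

```python
# Compiling a pair ("bracket"): the compiler may evaluate either component
# first. For pure components the choice commutes; an effectful component
# (here: appending to a log) makes the choice observable.

def pair_left_first(f, g, x):
    a = f(x)   # left component evaluated first
    b = g(x)
    return (a, b)

def pair_right_first(f, g, x):
    b = g(x)   # right component evaluated first
    a = f(x)
    return (a, b)

square = lambda n: n * n
double = lambda n: 2 * n
assert pair_left_first(square, double, 5) == pair_right_first(square, double, 5)

# With a side effect, the order leaks:
log = []
noisy = lambda tag: lambda n: (log.append(tag), n)[1]
pair_left_first(noisy("L"), noisy("R"), 0)
first_order = log[:]
log.clear()
pair_right_first(noisy("L"), noisy("R"), 0)
assert first_order == ["L", "R"] and log == ["R", "L"]
```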
17:59:11 <ais523> yes – and ideally a well-designed language would make it impossible to write a program where the choice was observable
17:59:25 <korvo> But that suggests that, as a compiler, we're embedding a possibly-trivial automorphism group into a non-trivial group.
17:59:35 <ais523> for example, you could reject programs in which both arguments to the pair constructor had side effects
18:00:12 <ais523> or use a type system to do the same thing
18:00:45 <korvo> And Yanofsky showed that the non-trivial automorphism groups correspond to equivalences of programs which don't fully respect bracket, composition, or recursion. Since bracket is freely respected here, and since composition is usually implied in homomorphic compilers, this means that e.g. Cammy's compiler must create some observable non-equivalences of recursion.
18:00:46 <ais523> I don't think pair constructors are special here, the same thing is true of function arguments in general
18:01:00 <korvo> And indeed there are some test Cammy programs that provoke StackOverflow in the underlying JIT.
18:02:10 <korvo> Pair construction is just a convenient example. We could use any categorical structure that's implied by the three ops.
18:02:16 <ais523> I am not surprised that lack of equivalence is mathematically unavoidable, and agree that recursion/loops is the best place for it to happen
18:03:19 <korvo> I used to be a little surprised that OCaml and GHC Haskell don't have eta-equivalence. Angry, even. Now I'm thinking that it's inevitable if they want to automatically and homomorphically generate efficient programs.
18:04:40 <korvo> (It doesn't matter that Hask isn't a category! Yanofsky expects languages to only give graphs. For categories, the structure's already too nice to have interesting Galois theory.)
18:06:48 <ais523> which version of eta-equivalence are you talking about? the general one which lets you substitute any function for another function with the same I/O behaviour, or the specific one that only converts between f and \x -> f x?
18:07:34 <korvo> Either TBH, but Ocaml specifically lacks the latter, and GHC has historically had many bugs relating to it.
18:08:10 <ais523> right – Ocaml specifically has observable evaluation order which breaks that equivalence – Haskell I would have expected to uphold it, though, and it sounds like it's trying to?
18:12:20 <korvo> Well, the automorphism groups don't preserve computational complexity. GHC bugs usually aren't miscompiles but performance problems; the relevant part of the compiler is their "lambda lifter", which also decides whether to eta-convert expressions to add/remove laziness.
18:12:57 <ais523> ah, I see
18:13:14 <korvo> ...I mean, some of the automorphism groups do preserve complexity, but anything involving loops appears to not be that easy. Yanofsky suggests that natural numbers are some sort of special semantic object that brings in the complexity.
18:13:56 <ais523> I think that might be backwards – in the sense that I suspect the same sorts of things that introduce complexity can also be used to implement natural numbers
18:14:29 <ais523> this is why The Waterfall Model is so good at proving TCness – pretty much anything that is complex enough to be TC can implement natural numbers more or less directly, and TWM doesn't need anything else
18:15:30 <ais523> and the basic reason is that anything TC has to be able to store arbitrary amounts of data, and "the amount of data" is typically something you can use to represent a number
18:15:45 <esolangs> [[Table]] M https://esolangs.org/w/index.php?diff=154565&oldid=154550 * Rdococ * (-156)
18:16:44 <esolangs> [[Table]] M https://esolangs.org/w/index.php?diff=154566&oldid=154565 * Rdococ * (+1) /* Truth machine */
18:26:20 <korvo> That's definitely a way of looking at it, although I worry that it's too focused on the computer as the source of the complexity. After all, we usually define complexity in terms of asymptotes which extend beyond any single computer.
18:26:45 <ais523> oh, I'm thinking in terms of abstract machines here, not physical computers
18:27:49 <korvo> Oh! I was always thinking in terms of abstract machines, sorry. Like, "the computer" might well be Kleene's formalism, which also assumes the existence of natural numbers.
18:28:39 <ais523> I guess I have a semantics-first way of thinking about TC proofs because that's how most esolangs are defined
18:29:37 <ais523> but I also have a suspicion that taking asymptotes/limits of programs is a dangerous/risky operation in the sense that it may not be well-defined – it can definitely cause a huge expansion to a computational class
18:29:45 <korvo> No worries. I only think so much about abstract syntax because I'm always writing cruddy compilers.
18:30:42 <ais523> for example, suppose you have a language equipped with a random oracle; if you use it it returns either true (with probability p) or false (with probability 1-p) – you can give that to a TC language and it's still TC (just with potentially random behaviour) – but if you take the limit as p goes to 0, it becomes able to solve the halting problem
18:30:54 <ais523> (even though the actual value of p=0 would be entirely useless)
18:31:54 <korvo> Right. We can always define limits of categories, and a "star graph" category can serve as the appropriate diagram; it's got arrows 0 → 1, 0 → 2, 0 → 3, etc. But programs only give a graph, not a category, under typical rewriting rules.
18:33:04 <korvo> Ah, yeah, the limit has to be WRT some categorical structure, and I'm not sure how the complexity class R behaves when we do that. (I've failed to understand Turing categories like three times; they are not easy constructions.)
18:33:16 <ais523> I would not expect programs to usefully give a limit ("useful" in the sense that it corresponds to any operation that you can actually calculate or implement or that gives insight for reasoning about programs)
18:35:09 <korvo> Well, we can get one insight: what are some trivial complexity bounds for a given program? If we can determine which algorithm it implements, then we can shift to the category of algorithms, optionally give it a non-uniform representation, map it onto a star graph, and take the limit in some metric space.
18:36:04 <korvo> That determination must be the hard part, because we usually just kind of handwave over the rest of it!
18:36:58 <ais523> I agree – you can prove that by "contradiction" in that if we could easily tell what algorithm an arbitrary program implemented, it would be trivial to optimise it
18:38:05 <korvo> In the sense that, since programs are surjective on algorithms, once we've picked out a representative (which we established earlier is hard) then we can hand-optimize that representative. And recognizing an algorithm is "idiom detection" from compiler engineering.
18:38:40 <ais523> there are limited languages in which you can do that, e.g. with a language that has integer constant and variables and + - × and non-recursive functions but no control flow or loops, you can canonicalise functions into polynomials
18:39:11 <ais523> and one thing that I'm interested in is expanding that sort of language into something that still canonicalises but can express a greater range of useful programs
18:39:23 <ais523> e.g. can we add bitwise operators and bitshifts to that?
18:39:26 <korvo> Oh, but also: for a given program, we know at least one algorithm it implements, because we know one function it implements: it implements the effects encoded by the Kleisli category of the effect monad! And algos are surjective on functions.
18:40:11 <korvo> Further we know how to use abstract interpretation to concretely transform a program with loops into a worst-case complexity analysis, and we teach it to undergrads when we show them the triply-nested loop for matrix multiplication.
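The undergraduate example korvo cites can be instrumented directly: counting the innermost operations of the triply-nested loop recovers the n³ worst-case bound. A small sketch:

```python
# Naive matrix multiplication: the triply-nested loop performs exactly
# n*n*n multiply-adds, which is where the O(n^3) bound comes from.

def matmul(a, b):
    n = len(a)
    ops = 0
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                ops += 1
    return c, ops

identity = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
m = [[i + 4 * j for j in range(4)] for i in range(4)]
product, ops = matmul(m, identity)
assert product == m      # multiplying by the identity is a no-op
assert ops == 4 ** 3     # but still costs n^3 operations
```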
18:40:38 <ais523> idiom detection annoys me – I a) intuitively feel like it's the wrong approach but b) don't know any better alternatives
18:41:11 <ais523> I think the reason I think it's the wrong approach is that it will usually optimise some programs differently from other programs that are clearly (to a human) equivalent
18:41:19 <ais523> especially if it isn't confluent, which it isn't usually
18:41:44 <korvo> Hm. That's a good question and one that we'd need a ring theorist to really take apart. It sounds very familiar, maybe because somebody stubbed a Diophantine-equation article recently.
18:42:52 <korvo> Right, we know that tile-oriented instruction selection is NP-hard to do well, and idiom detection is -- if we allow individual instructions to be idioms -- therefore NP-hard too.
18:43:10 <ais523> come to think of it, just being able to prove that a set of tree-rewriting optimisations is confluent would be useful, and seems theoretically possible in at least some cases
18:43:29 <ais523> oh, NP-hardness, that makes sense
18:44:07 <korvo> To the point where I always remember NOLTIS in terms of what it stands for: Nearly-Optimal Linear-Time Instruction Selection. Wow! And it uses tiles explicitly, too.
18:45:24 <ais523> related: I recently came across a problem related to instruction selection for which the abstract version is this: you have a number of known finite sets, and need to choose one element for each set – for each pair of elements in different sets there is a (possibly zero) cost, and you want to make a selection that minimises the total cost across all pairs of elements you selected
18:46:02 <ais523> this looks really reminiscent of an NP-complete problem but I haven't immediately figured out which one – in particular it reminds me a lot of Boolean satisfiability but the restriction to pairs makes it hard to implement anything other than 2-SAT directly
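The abstract problem ais523 states admits a direct brute-force formulation, which makes the structure easy to see. A sketch with an invented toy instance (sets and costs are illustrative, not from the conversation):

```python
from itertools import product

# Pick one element from each finite set so as to minimise the sum of
# pairwise costs over all pairs of chosen elements.

def best_selection(sets, cost):
    """cost(a, b) -> pairwise cost; returns (min_total, selection)."""
    best = None
    for choice in product(*sets):
        total = sum(cost(choice[i], choice[j])
                    for i in range(len(choice))
                    for j in range(i + 1, len(choice)))
        if best is None or total < best[0]:
            best = (total, choice)
    return best

# Toy instance in the Hamiltonian-cycle style mentioned below: a high
# cost for picking two conflicting (here: equal) elements, zero otherwise.
sets = [[1, 2], [2, 3], [1, 3]]
conflict = lambda a, b: 10 if a == b else 0
total, choice = best_selection(sets, conflict)
assert total == 0  # a conflict-free selection exists, e.g. (1, 2, 3)
```

Brute force is exponential in the number of sets, consistent with the NP-hardness suspicion; the point of the sketch is the problem shape, not the algorithm.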
18:46:17 -!- b_jonas has joined.
18:46:56 <korvo> It reminds me of how Sudoku is solved, and Hamiltonian cycles are doable in a similar setup. It becomes 0-1 programming when we make the cost an inverted Kronecker delta (0 when i = j and 1 otherwise).
18:47:44 <korvo> Pair-of-elements is like a 2-graph; it's a double-decker where each edge is valued in the *elements* of nodes and not just the nodes themselves.
18:48:05 <ais523> oh you're right, I think it does do Hamiltonian cycles directly (the sets are the edges from each vertex, and there's a high cost for choosing two edges that don't connect to each other)
18:48:39 <korvo> It reminds me of Sudoku because I bet Norvig's heuristic is decent on it: pick the smallest set and generate the collection of all possible pairs in/out of it, recursively eliminating possibilities.
18:49:06 <ais523> yes, it looks like the sort of NP-complete problem for which there are good approximations
19:05:41 <korvo> Okay, I'm back to the three ops. Was thinking about how UNIX kernels used to be handwritten. A kernel can take programs and perform the three ops on them. We'll only consider input and output effects, like with UNIX pipes.
19:07:00 <korvo> Then composition obviously holds. Bracket-pairing should work too. Recursion needs help from a system daemon or a jank series of syscalls but should be possible. So, in a sense, one of the jobs of a (UNIX) kernel is to promote programs into algorithms; the individual programs might not compose, but the computer does as a whole.
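The "kernel promotes programs into algorithms" point can be sketched by modelling programs as stdin-to-stdout string transformers and letting the environment supply the three ops (a toy model, not real pipes; all names invented):

```python
# Individual "programs" (string -> string transformers) know nothing of
# each other; the environment composes them, pairs them, and iterates
# them -- Yanofsky's three ops.

def compose(p, q):           # the UNIX pipe: p | q
    return lambda s: q(p(s))

def bracket(p, q):           # run both on the same input, like tee
    return lambda s: (p(s), q(s))

def iterate(p, n):           # repeated composition (primitive recursion)
    out = lambda s: s
    for _ in range(n):
        out = compose(out, p)
    return out

shout = str.upper
bang = lambda s: s + "!"

assert compose(shout, bang)("hi") == "HI!"
assert bracket(shout, bang)("hi") == ("HI", "hi!")
assert iterate(bang, 3)("hi") == "hi!!!"
```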
19:12:21 <korvo> This is a pretty stark contrast with the "page of code" school, e.g. from VPRI/STEPS, which contends that we should write a single low-level program that is about one page long and merely bootstraps a nicer programming environment, about five or six times.
19:13:15 <ais523> I'm not convinced that the "page of code" strategy is necessarily a sensible way to do programming, but it sounds like a lot of fun – I might try that some time
19:13:40 <korvo> Sure, maybe the first page of code gets us into a nice low-level image-based Forth-ish or Tcl-ish environment. Maybe the next page of code sets up a GC. Maybe the next page establishes platform-specific hooks and prepares for a userland.
19:14:28 <korvo> But at some point, we want the computer to transition from running one program to providing some sort of program-running environment and giving us various meta powers, and in particular we might want to compose two programs. But programs don't compose.
19:14:53 <ais523> I think the page-of-code approach eventually hits Kolmogorov complexity limits
19:15:18 <ais523> you can create increasingly golfy programming environments, but even the golfiest won't be able to fit a program that's too complicated into a single page
19:15:44 <korvo> For sure. I also think it's based on some very optimistic assumptions. I think I've documented Nile on-wiki as an example of a STEPS language; it's part ML and part APL, and it was designed to express GPU shaders very succinctly.
19:15:50 <ais523> so you'd have to start encoding the actual program you wanted into the language, rather than just writing the best generic language you could
19:16:00 <korvo> And that still takes maybe two pages to implement Porter-Duff in toto.
19:16:12 <ais523> and at that point you're losing the benefits from the approach
19:16:33 <ais523> it could make sense for early bootstrapping, though, as long as you abandon it once it stops working
19:16:36 <korvo> Or at least the benefits start shifting to the eDSL benefits.
19:17:44 <korvo> ...Sorry, what a bad sentence, and I can't figure out how to rephrase it.
19:19:04 <korvo> Like, we want to make simple high-level statements like "boot the system", and if those get rephrased to machine-parseable `boot(theSystem)` statements then that's not necessarily worse. Machines can check their well-formedness, do a model-check, etc.
19:19:35 <korvo> Pantagruel might be a good example of that sort of language, but the reference implementation's in Janet and I've been unable to adopt it because Janet's custom build system is jank.
20:02:00 <esolangs> [[Clockwise Turing machine]] N https://esolangs.org/w/index.php?oldid=154567 * Stkptr * (+2710) creat
20:07:15 <esolangs> [[FOSMOL]] M https://esolangs.org/w/index.php?diff=154568&oldid=153872 * Aadenboy * (+46) /* Example macros */ oh duplication is simple actually
20:08:39 <esolangs> [[Pantagruel]] N https://esolangs.org/w/index.php?oldid=154569 * Corbin * (+1818) Stub a desirable specification language.
20:13:31 <korvo> ais523: Okay, so I'm still thinkin' way too hard, but here's an example of a spec language that isn't as encumbered as Dafny or as austere as TLA+. Pantagruel documents are basically little diagrams of a simple type theory. We could imagine interpreting them in a world of functions.
20:14:55 <korvo> We could imagine calculating a program for such a spec. First, figure out a candidate function; we'll just take the least upper point of the diagram, requiring every property to abstractly hold. Then pull back once for an algorithm, and twice for a program.
20:16:02 <korvo> Hopefully we only get stuck at that second stage, since that's the one that we thought was hard earlier. Which algorithm implements the least upper point? Well, any algorithm that checks each property by examining the input will do. We can choose the cheapest one through standard algebraic optimization.
20:18:32 <esolangs> [[Clockwise Turing machine]] https://esolangs.org/w/index.php?diff=154570&oldid=154567 * Stkptr * (+29)
20:20:14 <korvo> (I'm not handwaving that, BTW. There's a wonderful book using equational reasoning to work through how to *compute* algorithms with desired properties: https://di.uminho.pt/~jno/ps/pdbc.pdf "Program Design by Calculation")
20:20:34 <esolangs> [[Clockwise Turing machine]] https://esolangs.org/w/index.php?diff=154571&oldid=154570 * Stkptr * (-29) Undo revision [[Special:Diff/154570|154570]] by [[Special:Contributions/Stkptr|Stkptr]] ([[User talk:Stkptr|talk]])
20:21:24 <korvo> (I guess this is covered in the current Chapter 7, "Contract-oriented Programming")
20:29:11 <ais523> I'm kind-of stuck confused between different levels of abstraction, as usual when I'm thinking about category theory
20:30:07 <korvo> Totally fair. Galois theory is very difficult for me in general, and I think I'd be completely lost if it weren't for programs => algos => functions right now.
20:31:23 <ais523> is it the case that with your terminology, algorithm→program is "hard" because the program captures inessential details that the algorithm doesn't capture, so you can end up producing an undesirable program by forcing the wrong details?
20:33:35 <korvo> It's hard to find an efficient program quickly. NP-hard in the specific case of idiom recognition, at least. We can find a decent program using homomorphisms and generic tools, but it'll do "slow" things that respect the symmetries of composition, like saving registers for calling conventions or spilling them for inlined subroutines.
20:34:16 <ais523> OK, yes, I think I understand now – finding an arbitrary program isn't hard, but finding one with desirable properties is
20:34:31 <korvo> The faster programs might exist, but they won't be found programmatically by a symmetry-respecting tool like a BURG. They'll have to be found by something a little more brute like a superoptimizer.
20:35:30 <ais523> oh, this reminds me – I think at least inefficiencies related to cross-subroutine register allocation are probably fixable without a full rethink of the way compilers work
20:36:29 <ais523> I think register allocation is done too early in typical compile pipelines – it seems like the sort of thing that could reasonably be left as late as possible
20:36:32 <korvo> A slogan that might help: A concrete category is a path category on some graph (called its "quiver", because it holds arrows, haha~ ugh). This sort of program generation has to follow paths from one possible optimized program to another, and those paths are defined WRT some underlying symmetry due to their inner control flow.
20:37:08 <ais523> and that would open the possibility of each subroutine being built with its own ABI, or maybe even multiple ABIs, that are propagated to do a more optimal register assignment
20:37:59 <korvo> In order to find one of those uniquely efficient programs, the generator has to find a similarly unique path of applied optimizations; in particular, *no optimizations may commute* or else there are multiple underlying paths.
20:38:12 <korvo> Yes, exactly.
20:40:54 <korvo> ais523: Oh sheesh, that reminds me of a blog post I did *not* write. If we ignore call/return and stack semantics, the remaining opcodes of most processors give a structure called an "operad".
20:41:57 <korvo> Terminology around these things is awful. An operad is a category that takes multiple inputs and has multiple outputs (a "multicategory"). We want to let some registers be general and others specific (they have "colors", again ugh), and we can apply arguments in any order.
20:42:43 <korvo> But once that's all set up, an operad exactly encodes the dataflow of a program. To evaluate some operation, first evaluate its arguments and then perform the operation itself.
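korvo's "evaluate the arguments, then perform the operation" reading of dataflow can be sketched as a tiny tree evaluator (a toy stand-in for the operadic picture; the operation names are invented for illustration):

```python
# Dataflow-as-tree evaluation: to evaluate an operation, first evaluate
# its arguments, then apply the operation itself. A toy stand-in for
# the operadic reading of a program's dataflow; illustrative only.

OPS = {
    "add": lambda a, b: a + b,
    "mul": lambda a, b: a * b,
    "neg": lambda a: -a,
}

def eval_node(node):
    if isinstance(node, (int, float)):   # a leaf is just a value
        return node
    op, *args = node
    # arguments first, then the operation itself
    return OPS[op](*[eval_node(a) for a in args])

print(eval_node(("add", ("mul", 2, 3), ("neg", 4))))  # 2
```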
20:43:46 <ais523> I think most practical processors nowadays are moving away from special-purpose registers, except a) stack/subroutine-related registers, b) the instruction pointer, and c) the fact that some registers are typically larger than others
20:44:06 <korvo> So the *processor* has symmetries! It's not actually running programs, it's running algorithms! No wonder the modern CPU basically has an on-die JIT.
20:44:52 <korvo> But only if there's a lot of GPRs. Like maybe two or three GPRs, at least. So maybe the 6502 has genuine programs.
20:45:00 <ais523> oh, that's more true than you imagine – registers nowadays are primarily a convention for expressing SSA in a fairly compressed/golfy form and get decompiled into SSA in the processor
20:45:56 <ais523> the only time a value is stored in an actual physical register is if you don't mention the register for ages (around 100 instructions) and the compiler needs to remember the assignment of value to register name long-term
20:46:06 <ais523> the rest of the time it's all virtual SSA registers, and there are hundreds of them
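The renaming ais523 describes, architectural register names decompiled into fresh SSA values, can be modelled in a few lines (a toy register alias table, not any real microarchitecture):

```python
# Toy register renamer: architectural names are "decompiled" into SSA
# values, as in a modern out-of-order core. Purely illustrative.

def rename(instructions):
    """Each instruction is (dest, src1, src2). Returns SSA triples."""
    rat = {}        # register alias table: arch name -> current SSA id
    next_id = 0
    ssa = []
    for dest, a, b in instructions:
        # reads go through the current mapping
        va = rat.get(a, a)
        vb = rat.get(b, b)
        # every write gets a brand-new virtual register, so
        # write-after-write hazards on 'dest' disappear
        rat[dest] = f"v{next_id}"
        next_id += 1
        ssa.append((rat[dest], va, vb))
    return ssa

prog = [("r1", "x", "y"),    # r1 = x op y
        ("r1", "r1", "z"),   # reuses the *name* r1, not the value
        ("r2", "r1", "r1")]
print(rename(prog))
```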
20:47:44 <korvo> But 6502 is different. Or Z80, whatever. Every primitive operation only operates on specific registers, and they usually can't be substituted for each other. So there's practically no commuting sequences of code in terms of register effects.
20:47:54 <ais523> yes
20:48:41 <korvo> So there's no point to JIT. Or the point would only be to JIT an emulator if we're virtualized, I guess. But the hardware itself, not able to commute operations, executes programs.
20:48:43 <ais523> microcontrollers can be different again – the one I used had one traditional register which was an implicit operand of basically every command, and the other operand was memory, but it had a huge number of special-purpose memory addresses
21:49:23 <ais523> e.g. to do an indirect access to memory, you would store the address at, IIRC, memory location 4 and then read/write the value of memory location 0, which was defined as an alias to whatever memory location 4 was pointing to
20:49:37 <ais523> so from one point of view it had only one register, from another point of view it had hundreds
20:49:58 <ais523> it was sort-of like a transport-triggered architecture, except that it had normal arithmetic instructions as well as the memory-mapping
20:50:18 <ais523> and likewise normal control flow instructions
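The aliasing ais523 recalls (location 0 standing for whatever location 4 points at, much like a PIC's INDF/FSR pair) can be modelled directly; the specific addresses 0 and 4 are taken from his recollection above:

```python
# Toy memory model where address 0 is an alias for the cell that
# address 4 points at (INDF/FSR-style indirection). Illustrative only.

class Mem:
    def __init__(self, size=256):
        self.cells = [0] * size

    def read(self, addr):
        if addr == 0:                       # follow the pointer at 4
            return self.cells[self.cells[4]]
        return self.cells[addr]

    def write(self, addr, val):
        if addr == 0:                       # write through the pointer
            self.cells[self.cells[4]] = val
        else:
            self.cells[addr] = val

m = Mem()
m.write(4, 42)      # pointer register := 42
m.write(0, 7)       # writes through to cell 42
print(m.read(42))   # 7
```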
20:50:53 <korvo> So nothing would commute, since the implicit register creates data dependencies. Although, if we add standard read-write memory semantics for the rest of the memory bank, then some operations would commute again.
20:52:18 <ais523> oh, it was worse than that – there were only a couple of hundred bytes of memory total, and it was documented as expected behaviour to use special-purpose registers for general-purpose purposes in order to get a bit more memory
20:52:38 <korvo> So, in some sense, equipping a concrete machine with some I/O -- even just memory -- is loosening it up to only evaluate some algorithm, rather than a specific program.
20:52:41 <ais523> many of them would, e.g., have enable flags in a different register and act like regular memory when disabled
20:52:51 <korvo> And that makes sense if we think about memory controllers having many different tools for speeding up transfers.
20:53:37 <ais523> I found some of my own asm code for this, it's mostly mov instructions with occasional arithmetic and control flow
20:54:04 <ais523> and instructions to set/clear/toggle single bits directly in memory, without needing to go via the register
20:54:50 <ais523> I wasn't expecting this to have RMW instructions but those ones do make sense, given how the processor works (and, come to think of it, might have been implemented without a specific read cycle via using memory that could be written a single bit at a time)
20:55:28 <korvo> *But* if the program is written to require a specific memory-controller behavior, then that would still be targeting a concrete machine. So it turns out that when we want to work with "a machine", we need to be fairly specific about whether it's evaluating programs or algorithms, and we can only tell by doing some Galois theory to see how strongly Hyrum's Law applies.
20:55:39 <ais523> ah no, it appears to have actual RMW instructions, there are some "add register to memory", "increment/decrement memory" in here
20:56:04 <korvo> Oh, that's nice of them.
20:56:27 <ais523> korvo: my guess is that practical programming and practical computers don't try to make a hard distinction between programs and algorithms in your sense
20:56:41 <ais523> and this occasionally leads to trouble but mostly works well enough for people to not be bothered by it
20:58:13 <korvo> ais523: I agree. And also, more nuanced: a hardware-maker is trying to turn programs into algorithms, which requires new symmetries to be exposed so that some operations can commute. A programmer is trying to turn algorithms into programs, which requires breaking symmetries and choosing a good order of operations.
21:00:34 <korvo> A compiler author must do both, first by taking a program as representative of a class of algorithms, then by viewing that class as an equivalent category to some other class, and finally by choosing an efficient representative program from the class.
21:03:28 <korvo> Those almost sound like slogans. Needs a bit more cooking.
21:05:15 <esolangs> [[Special:Log/newusers]] create * OskuDev * New user account
21:06:44 <b_jonas> ais523: I may be misunderstanding something, but I don't think Hamiltonian cycles can be easily reduced to the optimization problem that you gave.
21:07:31 <korvo> ais523: Sorry for the headache; thanks for the rubber-ducking. I think that I have the bulk of the insights for the post; the rest of it is just verifying theses like "every modern language expresses exactly one of: programs, algorithms, or functions".
21:08:06 <b_jonas> I think it's NP-complete nevertheless, that's just not the right way to prove it
21:08:52 <b_jonas> I think you have to reduce chromatic number instead, or maximal matching, or something like that.
21:09:15 <b_jonas> The reduction that you mention doesn't work because it doesn't enforce that there's only one cycle instead of a union of disjoint cycles.
21:12:07 <korvo> b_jonas: Ah curses, and Sudoku doesn't encode nicely for the same reason. Nice catch, thank you.
21:12:27 <ais523> b_jonas: ah yes, the construction doesn't seem to handle cycle count
21:13:03 <ais523> I actually considered pinging you at that point in the conversation, it seemed like your sort of problem
21:14:17 <esolangs> [[Bi-tag system]] N https://esolangs.org/w/index.php?oldid=154572 * Stkptr * (+2341) crate pag
21:14:28 <ais523> the chromatic number decision problem (more or less than some specific value) seems to work fairly directly? just have a high cost for two adjacent vertices being the same color
21:15:41 <korvo> My entire thought process about it was wrong, sorry. In particular, if the cost function is always 0 or 1, then it's poly-time whether there's a zero-cost path. I was completely misled by superficial details.
21:15:53 <esolangs> [[Tag system]] https://esolangs.org/w/index.php?diff=154573&oldid=115677 * Stkptr * (+20) /* See also */
21:16:20 <ais523> korvo: I think even the only-0-or-1 case can solve the chromatic number decision problem
21:16:44 <ais523> which is NP-complete for, e.g., determining whether a graph is 3-colorable
21:16:49 <b_jonas> correct, that's my sort of problem
21:17:07 <b_jonas> and yes, chromatic number works well for this
21:18:45 <esolangs> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=154574&oldid=154535 * OskuDev * (+94) /* Introductions */
21:20:34 <korvo> Oh, okay, yeah. The 0-1 case comes out as the path problem for inhabited finite relations, which is canonically NP-complete. I see the chromatic-number encoding too.
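The encoding ais523 proposed ("a high cost for two adjacent vertices being the same color", with only 0/1 costs) can be written out concretely; brute force over assignments, just to make the reduction visible:

```python
# 0/1-cost encoding of graph 3-coloring: cost = number of monochromatic
# edges, so the graph is 3-colorable iff the minimum cost is 0.
# Brute-force sketch, exponential on purpose (the problem is NP-complete).
from itertools import product

def min_cost_3_coloring(n, edges):
    best = len(edges) + 1
    for colors in product(range(3), repeat=n):
        cost = sum(1 for u, v in edges if colors[u] == colors[v])
        best = min(best, cost)
    return best

triangle = [(0, 1), (1, 2), (0, 2)]
k4 = triangle + [(0, 3), (1, 3), (2, 3)]
print(min_cost_3_coloring(3, triangle))  # 0: the triangle is 3-colorable
print(min_cost_3_coloring(4, k4))        # 1: K4 is not
```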
21:21:04 -!- craigo has quit (Quit: Leaving).
21:29:39 <b_jonas> as for read-modify-write memory, the silliest language in that respect is https://esolangs.org/wiki/Y86 . it starts normal: most instructions can read an operand from memory but only write to a register, there's of course an ordinary store instruction. but there's one exception: you can one's complement memory in place. I don't know why it's designed that way.
21:31:50 <korvo> It looks like it's the only unary operation? Could be a simplification due to how addressing modes are (expected to be) decoded in hardware.
21:32:46 <b_jonas> korvo: yeah, but why would they have that unary instruction when there are so few instructions?
21:32:59 <korvo> At the time, the same sort of operation on a Z80 or 6502 could only happen on the accumulator X, I think? So maybe it went from an op with an implicit address to an op with explicit addressing.
21:33:11 <korvo> b_jonas: I don't know. Retrospect is puzzling.
21:33:48 <zzo38> I think that there may be benefits of not having any instruction that both read and write memory; some are read and some are write but not both.
21:35:19 <b_jonas> I mean there's kind of a reason, which is that to some extent y86 is based on 8086, which has special instructions for one's complement, but still, it has lots of other special instructions too
21:40:42 <zzo38> (The benefit might be that the cycles of the instruction would be more clearly separated, e.g. instruction cycle, calculation cycle, memory cycle. That way you would not need multiple memory cycles, but calculation cycles might be able to run at the same time as other instructions do in some cases, since the calculation cycle does not need to access memory.)
21:42:13 <b_jonas> a more common pattern is https://esolangs.org/wiki/Viktor%27s_amazing_4-bit_processor and https://esolangs.org/wiki/Lawrence_J._Krakauer%27s_decimal_computer , both of which have arithmetic instructions that get one operand from memory and the other from accumulator and store to accumulator, and have two store instructions: one to store the accumulator and one to store the program counter (for
21:42:19 <b_jonas> subroutine call).
21:49:57 <ais523> korvo: on the 6502, A is the register that supports most operations; X and Y are limited to moves, increment/decrement, and being used as an address
21:50:20 <ais523> I'm not as familiar with the Z80 but am not sure it even has an X
21:50:36 <ais523> (I learned to program on a 6502, originally, so it's the first platform I learned)
21:51:52 <ais523> b_jonas: huh, I'm familiar with two standard ways to do subroutine calls in hardware and that isn't either of them
21:52:25 <ais523> although I vaguely remember it being used by some early programming languages that didn't support recursion – the location of the caller would be stored in a static variable of the callee
21:53:30 <ais523> it crosses my mind that you don't really need "store/push current address" if you have "store/push immediate", unless you're using some sort of ASLR – but it might be useful anyway due to the instruction being shorter / needing fewer immediate bytes
21:54:29 <b_jonas> ais523: well these are small toy processors. this wouldn't be very practical in a real machine because you have to know where the return instruction is to call the function, which makes it hard to change programs. optimized programs that ignore the published API and refer to addresses of specific subroutines in the kernal would hardcode not only the entry but the exit address. that would make it very
21:54:35 <b_jonas> annoying to update the ROM.
21:56:49 <ais523> b_jonas: IIRC the convention was to store the return address immediately before the called address
21:56:58 <ais523> (obviously this requires program memory to be writable)
21:57:36 <ais523> back in the days of old languages that didn't support recursion, presumably on processors that didn't have stacks because otherwise they would have used those instead
21:58:34 <ais523> ooh, even more fun – you don't need an indirect jump instruction to return, you can have a goto-immediate instruction at a known location and, when calling a function, just overwrite the immediate to decide where to return to
21:59:07 <ais523> this seems like it might potentially be a useful technique for implementing VMs in esolangs?
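ais523's trick, a goto-immediate at a known location whose operand the caller overwrites, can be sketched as a toy VM. The instruction set and addresses here are invented for illustration; the point is that no indirect jump ever occurs:

```python
# Toy VM returning without an indirect jump: every subroutine ends by
# jumping to a fixed "trampoline" slot, and "call" self-modifies that
# slot's immediate operand to hold the return address. Illustrative only.

RET_SLOT = 0                 # mem[0] holds ("jmp", return_address)

def run(mem, pc, acc=0):
    while True:
        op, arg = mem[pc]
        if op == "halt":
            return acc
        elif op == "add":
            acc += arg
            pc += 1
        elif op == "call":
            mem[RET_SLOT] = ("jmp", pc + 1)  # patch the trampoline
            pc = arg
        elif op == "jmp":                    # always a *direct* jump
            pc = arg

mem = {
    0: ("jmp", 0),        # trampoline, patched by "call"
    1: ("call", 10),      # main: call the subroutine at 10
    2: ("call", 10),      # and again
    3: ("halt", None),
    10: ("add", 5),       # subroutine body: acc += 5
    11: ("jmp", RET_SLOT) # "return" = direct jump to the trampoline
}
print(run(mem, pc=1))     # 10
```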
22:00:32 <b_jonas> all three of Knuth's processors (that I documented) have a call instruction that puts the return address into a register, and then the called subroutine can use that register.
22:04:23 <b_jonas> ais523: yes, Viktor T. Toth's CPU and Krakauer's language do that, they don't have indexing on any instruction, just modify the address field of the instruction in memory. and Krakauer's language actually adds the same trick to this as MIX does: the store program counter instruction actually only modifies the address field of the memory word, keeps the instruction field unchanged. Viktor T. Toth's CPU
22:04:29 <b_jonas> has nybble-granular memory so you can just directly address the address field, just like you can do on the 6502.
22:05:31 <ais523> "directly address the address field" reminds me of Redcode
22:06:01 <b_jonas> MIX also has indexing with index registers, and you can use that for an indirect jump, but subroutine returns don't do that too often. you can only store the return address to memory directly, it's one more instruction to load it back to an index register. can still be worth it if you have many return statements from the same subroutine of course.
22:07:20 <b_jonas> the 6502 and BMOW-1 have indexing with index registers, but even more so than MIX they encourage you to also use self-modifying code for indirect memory access.
22:10:08 <ais523> hmm, that's interesting – the 6502 doesn't really seem to need self-modifying code for that
22:10:09 <b_jonas> whereas https://esolangs.org/wiki/Apollo_Guidance_Computer has an indexing instruction that loads an index from memory and it will be added to the next instruction between fetching and executing it. it works for almost any instruction. so it's even more general than MIX's indexing mechanism.
22:10:42 <ais523> you can indirect via a memory address plus a register, and can add the register either to the address being indirected via or the address loaded indirectly
22:11:19 <b_jonas> ais523: in the 6502 the indexed addressing modes use the X or Y registers as an index, and those registers are one byte long, but the address space is two bytes long, and sometimes you want to index with more than just a byte
22:11:24 <ais523> I do like the "add to next instruction" for a short-pipeline processor, it's like you get the benefits of self-modifying code without needing to actually modify the program in RAM
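The AGC-style "add to next instruction" that b_jonas describes can be modelled in a few lines. The mnemonics and memory layout here are made up for illustration; only the mechanism (the index is applied between fetch and execute, and program memory is never written) follows the description above:

```python
# Toy version of the AGC's INDEX idea: the word at the given address is
# added to the *next* instruction's operand between fetch and execute,
# giving the effect of self-modifying code without writing program
# memory. Illustrative only.

def run(prog, mem):
    acc, pc, pending = 0, 0, 0
    while pc < len(prog):
        op, arg = prog[pc]
        arg += pending          # apply any INDEX from the previous step
        pending = 0
        pc += 1
        if op == "index":
            pending = mem[arg]  # affects the next instruction only
        elif op == "load":
            acc = mem[arg]
        elif op == "halt":
            return acc
    return acc

mem = {4: 2, 10: 0, 12: 77}
prog = [("index", 4),   # next operand += mem[4] (= 2)
        ("load", 10),   # effectively executes as "load 12"
        ("halt", 0)]
print(run(prog, mem))   # 77
```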
22:11:54 <ais523> b_jonas: yes but you still can indirect via two bytes read from memory
22:12:05 <ais523> so you don't need to self-modify code, just store the address you want in RAM
22:12:18 <ais523> (and you still get to add one byte of offset to it)
22:12:42 <b_jonas> yes, you don't *need* self-modifying code, but self-modifying code is very often the easiest, at least if you have a non-small amount of memory
22:12:55 <ais523> you can even do jump tables in a single instruction (although there's no automatic doubling of the offset like x86 can do, so you probably need another instruction to double the register first)
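The two 6502 indirect modes ais523 is describing, add the register to the address being indirected through, or to the address loaded indirectly, can be sketched over a flat byte array (not cycle-accurate; zero-page wrap quirks are ignored):

```python
# Sketch of the 6502's indexed indirect and indirect indexed addressing
# modes over a flat 64KiB array. Not cycle-accurate; illustrative only.

mem = bytearray(0x10000)

def read16(addr):
    """Fetch a little-endian 16-bit pointer."""
    return mem[addr] | (mem[addr + 1] << 8)

def lda_ind_y(zp, y):
    """LDA (zp),Y: fetch the pointer from zero page, add Y afterwards."""
    return mem[read16(zp) + y]

def lda_x_ind(zp, x):
    """LDA (zp,X): add X to the zero-page address, then fetch the pointer."""
    return mem[read16((zp + x) & 0xFF)]

# pointer to $1234 stored at zero-page $10, data at $1237
mem[0x10], mem[0x11] = 0x34, 0x12
mem[0x1237] = 99
print(lda_ind_y(0x10, 3))   # 99: indirection with no self-modifying code
```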
22:13:25 <ais523> I didn't program on banked 6502 implementations but they are very common, to get more than 64KiB of program+data memory
22:13:48 <ais523> and I suspect they would make self-modifying code even less useful, especially as code is usually executed from ROM on those
22:14:39 <b_jonas> yeah, in this case by "non-small" I just mean not just like 128 bytes of RAM plus 2 kilobytes of ROM like the Atari 2600 has, but enough RAM that you actually want indexes longer than a byte sometimes
22:15:35 <ais523> b_jonas: yes – but my point is that you can store the address into data RAM and indirect via it, or you can store the address into program RAM (if you even have any) and have it decoded as part of the instruction, and the former is neater and doesn't seem to have any downsides by comparison?
22:15:41 <ais523> unless you need a double indirection
22:18:22 <b_jonas> ais523: ok, I guess I was too strong above. probably you only want self-modifying code for a minority of indirect or indexed accesses.
22:18:38 <b_jonas> the indirect addressing modes are good for most cases
22:32:50 <esolangs> [['Python' is not recognized]] https://esolangs.org/w/index.php?diff=154575&oldid=154516 * Stkptr * (+35) It can never pop, so the stack is effectively 2 symbols deep
22:51:02 <esolangs> [[Special:Log/upload]] upload * Buckets * uploaded "[[File:Fontpride Logo.png]]": This is the logo for Fontpride/<span style="background-color:#FFAFC8;color:white;">F</span><span style="background-color:#74D7EE;color:black;">o</span><span style="background-color:#613915;color:white;">n</span><span style="background-color:#E40303;color:black;">t</span><span st
22:52:09 <esolangs> [[Language list]] M https://esolangs.org/w/index.php?diff=154577&oldid=154558 * Buckets * (+557)
22:52:52 <esolangs> [[User:Buckets]] M https://esolangs.org/w/index.php?diff=154578&oldid=154525 * Buckets * (+556)
22:53:52 <esolangs> [[User:Buckets]] M https://esolangs.org/w/index.php?diff=154579&oldid=154578 * Buckets * (+1)
22:54:33 <esolangs> [[Fontpride]] N https://esolangs.org/w/index.php?oldid=154580 * Buckets * (+12518) Created page with "{{wrongtitle|title=<span style="background-color:#FFAFC8;color:white;">F</span><span style="background-color:#74D7EE;color:black;">o</span><span style="background-color:#613915;color:white;">n</span><span style="background-color:#E40303;color:black;">t</span><span st
22:59:04 <esolangs> [[Batch No For]] https://esolangs.org/w/index.php?diff=154581&oldid=139891 * Stkptr * (+98)
23:05:59 -!- Sgeo has joined.
23:24:15 <esolangs> [[Talk:]] https://esolangs.org/w/index.php?diff=154582&oldid=108008 * Hotcrystal0 * (+61) /* Unstack */ new section
23:24:22 <esolangs> [[Talk:]] https://esolangs.org/w/index.php?diff=154583&oldid=154582 * Hotcrystal0 * (+93)
23:25:00 <esolangs> [[User:I am islptng/List of the users that is also in conwaylife.com]] https://esolangs.org/w/index.php?diff=154584&oldid=154489 * Hotcrystal0 * (+51)
23:25:12 <esolangs> [[User:I am islptng/List of the users that is also in conwaylife.com]] https://esolangs.org/w/index.php?diff=154585&oldid=154584 * Hotcrystal0 * (+3)
23:30:20 <esolangs> [[AH'TALIQUAE ENGLISH/Examples]] M https://esolangs.org/w/index.php?diff=154586&oldid=141133 * Buckets * (+0)