00:22:32 -!- DOS_User_webchat has joined.
00:22:45 -!- DOS_User_webchat has changed hostmask to ~DOS_User_@user/DOS-User:11249.
00:46:12 -!- DOS_User_webchat has quit (Remote host closed the connection).
00:54:26 -!- ais523 has quit (Quit: quit).
01:13:52 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=167959&oldid=167957 * NTMDev * (+909)
01:36:44 <Sgeo> There must exist a point at which if a Brainfuck implementation has too few memory cells, or the Brainfuck code cannot exceed a certain length, the existence of the implementation no longer proves that it's... whatever the memory bounded version of TC is. What is that point?
01:38:03 <int-e> Sgeo: Sure, if you follow along these lines you may come to see a computer as a giant but finite state machine.
01:38:35 <int-e> But TC is a much nicer theoretical concept than "fit for programming" which is what we care about in practice. Not necessarily for esolangs though.
01:39:02 <int-e> Sgeo: The original Brainfuck had 30k 8-bit cells so it's not TC either.
01:40:11 <Sgeo> I keep thinking about "compiling" Brainfuck to Burroughs E101. E101 only has 100 cells. And I don't see an easy way to... decrement a pointer in few instructions unless I limit to 10 cells. And there's only room for 8*16=128 instructions in E101's ... program
01:40:40 <int-e> Sgeo: Basically no actual implementation of any programming language is TC. (You could cheat by adding more storage peripherals over time, though eventually you'll run into physical limitations doing even that.)
01:41:35 <Sgeo> Also E101 doesn't do constants. So I'd also be "compiling" into instructions for the user: At the first halt type this number, at the second halt type this number, etc..
01:41:47 <Sgeo> Which is how real programs on it worked too
01:41:49 <int-e> Sgeo: Right, that may be too small for a brainfuck target. Or, really, for basically anything you want a computer to do. :)
01:41:53 <int-e> (these days)
01:42:46 <int-e> it really sounds more like an electronic calculator that has some basic macro capability for user-defined functions
01:43:31 <Sgeo> Burroughs was a calculator company. The keyboard looks like an adder machine. But I think it is in some sense a full computer (not a stored-program one though)
01:43:45 <Sgeo> https://bitsavers.org/pdf/burroughs/Electrodata/E101/
01:44:13 <int-e> yeah sorry I'm not going to be drawn into details :P
01:45:52 <int-e> Sgeo: Can you sort an array of 10 cells without human intervention? What about 50 cells or all 100? :)
01:45:52 <Sgeo> There's an extension that does allow setting E and F from the accumulator, I guess that can be used if I store a pointer in memory
01:46:45 <int-e> (sorting is one of the basic tasks that we often want a computer to do :) )
01:47:54 <Sgeo> I don't think I'm going to attempt to write a sort routine. Might not be able to do all 100 if it needs more scratch space than just the accumulator. There's a library of subroutines, all of which are math functions.
01:47:56 <int-e> I don't need an answer to the question. I'm musing about where I'd draw the line between a calculator and a computer.
01:48:56 <int-e> And I think if it *can't* sort then it's not a computer. If it can sort... well, that clearly moves it closer. Is it enough for me? I don't know! Obviously modern computers do so much more.
01:51:52 <int-e> (Interesting sideshow: Are analog computers for spacecraft computers? :) )
01:54:43 <Sgeo> Is there a sort where the only time I have to decrement the pointer is going back to the beginning of the array? I think that would be -easier- to implement
01:54:52 <Sgeo> (I'm not actually attempting to implement)
01:55:17 <Sgeo> I was thinking bubble sort but that doesn't quite fit
02:03:10 <int-e> Sgeo: uh, there is this horribly inefficient thing: https://paste.debian.net/1405805/
02:03:47 <int-e> (2**64 is supposed to be a number larger than any number of interest)
02:05:22 <Sgeo> Yeah I think that's implementable
02:05:38 <int-e> "horribly inefficient" -- it's exponential
02:06:06 <int-e> not quite as bad as the expected runtime of bogosort but close
02:09:47 <int-e> Well, IIRC it had O(2^n) worst case, O(2^n / n) average case and only O(n^2) best case.
02:10:20 <int-e> Assuming that the numbers to be sorted are all distinct
02:25:31 <sorear> the usual approach is to define a generalization of the machine where the memory size is a parameter, then prove that the problem is PSPACE-complete
02:25:36 <sorear> like nxn chess
02:30:17 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=167960&oldid=167959 * NTMDev * (+73)
02:30:54 -!- amby has quit (Quit: so long suckers! i rev up my motorcylce and create a huge cloud of smoke. when the cloud dissipates im lying completely dead on the pavement).
02:32:19 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=167961&oldid=167960 * NTMDev * (+379)
02:35:08 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=167962&oldid=167961 * NTMDev * (+201)
02:37:32 <esolangs> [[User:NTMDev]] https://esolangs.org/w/index.php?diff=167963&oldid=167818 * NTMDev * (+14)
03:01:29 <Sgeo> There's an optional addition that allows for 220 cells. Said addition also makes it easier to load a cell as a pointer in, um. I think 5 instructions.
03:01:52 <Sgeo> Which seems like a lot
04:00:39 <esolangs> [[Talk:INTERCAL]] https://esolangs.org/w/index.php?diff=167964&oldid=167847 * Jasper * (+209) /* Useful functions made using INTERCAL's unary / binary operators */
04:18:39 <sorear> if you write a bubble sort pass to write back the output 1 element later than it was read, you don't need to decrement anything. if you make the array circular, the data will be in the correct places after a number of iterations equal to the array size, which is exactly how many you need for the array to be sorted. I have 25 instructions https://paste.debian.net/1405811/
04:19:46 <sorear> apart from the harvard architecture and the coroutines (!) it seems like a fairly standard one-address machine
04:20:47 <sorear> the first 94 should be a 95, one-way counter to know when the sort is done
04:44:14 <Sgeo> !!!
04:44:22 <Sgeo> I should... maybe look at writing an emulator soon, huh.
04:57:13 <Sgeo> sorear, awesome! Though I'm not sure what you mean by the bracketed numbers, they can't be pinboard numbers
05:14:09 <Sgeo> Oh I see. Instruction numbers, and the second [0] is pinboard ... um. Did you index pinboards by 0? They go 1-8, with 0 meaning same pinboard.
06:15:55 -!- pool has quit (Ping timeout: 256 seconds).
06:16:15 -!- pool has joined.
06:43:24 <Sgeo> floor(log2(999999999999))+1 ... I think that's correct for how many bits needed to hold a 12 decimal digit number, right?
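A quick check of that arithmetic (Python; the float `math.log2` route agrees with the exact `int.bit_length` here, though `bit_length` is the safer choice near powers of two where float rounding could mislead):

```python
import math

x = 999_999_999_999                    # largest 12-decimal-digit number
bits = math.floor(math.log2(x)) + 1    # Sgeo's formula

# For positive integers, floor(log2(x)) + 1 is exactly x.bit_length().
# 2**39 = 549755813888 < x < 2**40 = 1099511627776, so 40 bits.
print(bits, x.bit_length())
```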
06:50:40 -!- chiselfuse has quit (Remote host closed the connection).
06:50:58 -!- chiselfuse has joined.
07:06:49 <esolangs> [[$]] M https://esolangs.org/w/index.php?diff=167965&oldid=144365 * ThrowAwayLurker * (+90) Added easily inferable detail that still deserves explicitness; also semicolon because I'm a nerd and it makes it flow better
07:10:03 <esolangs> [[Backsharp]] M https://esolangs.org/w/index.php?diff=167966&oldid=90825 * ThrowAwayLurker * (+4) Added dash
07:11:28 <esolangs> [[SQ]] M https://esolangs.org/w/index.php?diff=167967&oldid=80607 * ThrowAwayLurker * (-9) User page links should be clearly user page links, thus the "user:" should be included
07:27:14 <esolangs> [[Action symbol]] https://esolangs.org/w/index.php?diff=167968&oldid=167579 * Yayimhere2(school) * (-97) /* how it works */
07:31:53 <esolangs> [[Tttt]] https://esolangs.org/w/index.php?diff=167969&oldid=91798 * ThrowAwayLurker * (+505) Made it clearer and more professional/serviceable
07:35:44 <esolangs> [[Talk:Chimera]] N https://esolangs.org/w/index.php?oldid=167970 * ThrowAwayLurker * (+216) Created page with "Isn't the language trivially Turing-complete as it allows for the computation of arbitrary lambda functions? ~~~~"
07:39:10 -!- somelauw has joined.
07:48:18 <Sgeo> *self.memory.get(tens, ones)? = self.a;
07:48:28 <Sgeo> It feels weird to have a question mark on the left of the assignment like that
07:53:38 -!- tromp has joined.
07:54:16 <b_jonas> Sgeo: if you can use two pointers then you can do a bubble sort with pointers that you can only decrement by resetting
07:54:35 <b_jonas> or does this have like a tape/drum memory with just one pointer because there's one head?
07:55:04 <b_jonas> I think even with one head you can do a bubblesort, you just keep shifting the array forward in every iteration
07:55:29 <Sgeo> The "pointer" is two switches that can only be incremented. sorear wrote a sort that does the shifting-array-forward thing
07:56:15 <sorear> Sgeo: don't forget the sign bit
07:56:18 <Sgeo> I believe it is drum memory but that's not what the issue is. In the 220 memory model there is an instruction that can set the switches according to values in the accumulator
07:57:04 <Sgeo> sorear, thank you. 32 bits isn't enough anyway so rounding to 64 which is plenty
07:57:44 -!- somelauw has quit (Remote host closed the connection).
07:59:49 <sorear> it's almost but not quite a decimal64
08:00:49 <Sgeo> I'm sort of treating them as integers and assuming the implicit decimal point only matters during multiplication and division
08:01:50 <Sgeo> I should probably sleep soon
08:02:41 <Sgeo> I want a closer picture of the keyboard so I can understand X and Y better. Do they... default to 0? Or alarm if used and not set?
08:05:49 -!- svm has joined.
08:08:33 -!- msv has quit (Ping timeout: 265 seconds).
08:36:01 -!- Sgeo has quit (Read error: Connection reset by peer).
09:05:59 <esolangs> [[Action symbol]] https://esolangs.org/w/index.php?diff=167971&oldid=167968 * Yayimhere2(school) * (+112) /* how it works */
09:18:55 <esolangs> [[Action symbol]] https://esolangs.org/w/index.php?diff=167972&oldid=167971 * Yayimhere2(school) * (+517) /* examples */
09:21:31 <esolangs> [[Action symbol]] https://esolangs.org/w/index.php?diff=167973&oldid=167972 * Yayimhere2(school) * (+133) /* Computational class */
09:26:32 <esolangs> [[Talk:Esolang Quality Rating System]] https://esolangs.org/w/index.php?diff=167974&oldid=106292 * Yayimhere2(school) * (+540)
09:31:11 <esolangs> [[Do you remember me?]] https://esolangs.org/w/index.php?diff=167975&oldid=167862 * Yayimhere2(school) * (-1)
09:36:35 <esolangs> [[Do you remember me?]] https://esolangs.org/w/index.php?diff=167976&oldid=167975 * Yayimhere2(school) * (+27) /* Examples */ Added a see also section, with Also?
09:38:37 <esolangs> [[Do you remember me?]] https://esolangs.org/w/index.php?diff=167977&oldid=167976 * Yayimhere2(school) * (+42) /* Semantics */
09:40:53 <esolangs> [[Do you remember me?]] https://esolangs.org/w/index.php?diff=167978&oldid=167977 * Yayimhere2(school) * (+41) /* Semantics */
09:44:04 <esolangs> [[Do you remember me?]] https://esolangs.org/w/index.php?diff=167979&oldid=167978 * Yayimhere2(school) * (+114) /* Semantics */
09:51:22 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
10:39:04 <esolangs> [[One-Instruction Cyclic Tag]] https://esolangs.org/w/index.php?diff=167980&oldid=154954 * Yayimhere2(school) * (-52) /* Instruction TAG */ The placement of the <code> tags is weird, so I "fixed" it.
11:01:55 <esolangs> [[Talk:Stroke]] https://esolangs.org/w/index.php?diff=167981&oldid=129057 * Yayimhere2(school) * (+1068)
11:20:53 <esolangs> [[Mascarpone]] https://esolangs.org/w/index.php?diff=167982&oldid=123948 * Yayimhere2(school) * (+2) /* Stacks */ there was a missing "a" to my knowledge
11:47:05 <esolangs> [[InterWater]] https://esolangs.org/w/index.php?diff=167983&oldid=9082 * Tpaefawzen * (+47) +au +cat
11:47:58 <esolangs> [[VENIAL]] https://esolangs.org/w/index.php?diff=167984&oldid=9049 * Tpaefawzen * (+19) +catyr
12:19:16 -!- svm has changed nick to msv.
12:27:24 <APic> Hi
12:33:24 -!- ais523 has joined.
12:44:11 <ais523> <int-e> And I think if it *can't* sort then it's not a computer. ← what if it's able to produce tables of log, sin, cos? (IIRC this was Babbage's original motivation for inventing the computer)
12:47:26 <int-e> ais523: still just a calculator to me
12:47:37 <int-e> none of this is objective!
12:48:10 <int-e> I mean my idea of a calculator is shaped by a pocket calculator that I used in school, which had trigonometric and hyperbolic functions
12:48:21 <int-e> so it can't be that hard, right ;)
12:48:33 <esolangs> [[Talk:Esolang Quality Rating System]] https://esolangs.org/w/index.php?diff=167985&oldid=167974 * Ais523 * (+693) /* Computational Class points revamp */ thoughts
12:52:59 <fizzie> Pocket calculators with trigonometric functions are a sin.
12:54:39 <int-e> it's for a good cos
12:55:51 <fizzie> Well, in the burning pit you'll have time to work on your tan. (Okay, that's a stretch.)
12:56:32 <sorear> traditional card sorters are much simpler than anything that can do math
12:57:17 <int-e> necessary, not sufficient
12:58:19 <ais523> when I was a child I made a mechanical card sorter (in which the numbers to sort by were written in binary using holes and slots along the edge of the card)
12:59:11 <ais523> although it was mostly manual, you had to move the components into the right place by hand but that was only O(n log n) effort
12:59:26 <ais523> (you have to do it log n times, and push the pin through n cards)
12:59:32 <int-e> radix sort is still cool
12:59:50 <ais523> yep, the algorithm is basically binary radix sort
13:00:15 <ais523> although the non-recursive version (sort by lsb, then sort by second-lsb, etc.)
13:00:34 <ais523> hmm, I just realised that radix sort is stable
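The non-recursive pass order only works because each per-bit pass is stable, which is exactly what pulling a pin through the deck does mechanically. A small stand-in sketch for the pin-and-cards procedure (`lsd_radix_sort` is an invented name):

```python
def lsd_radix_sort(cards, bits):
    """Non-recursive binary radix sort: least significant bit first.

    Each pass partitions the cards on one bit while keeping the
    relative order of cards whose bit is equal (a stable partition),
    so after `bits` passes the full values are in order.
    """
    for b in range(bits):                          # one pass per bit
        zeros = [c for c in cards if not (c >> b) & 1]
        ones = [c for c in cards if (c >> b) & 1]
        cards = zeros + ones                       # stable partition on bit b
    return cards
```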
13:02:40 <fizzie> It always amuses me when reading the Lensman books when they have faster-than-light inertialess flight, antimatter ("negaspheres") and mind powers, but then rely on punch card databases to sort through people, and mechanical calculators to solve space ship navigation. Here's some choice quotes: https://0x0.st/KpK_.txt
13:03:48 <fizzie> Imagine solving "course-and-distance problems" at a rate of ten per minute! That's progress.
13:03:56 <ais523> at least in computer games, mixing technology from a large number of time periods, including the plausible future and the implausible future, often leads to a good game even though it doesn't make sense as a setting
13:04:00 <ais523> so it wouldn't surprise me if fiction can do the same
13:07:38 <fizzie> Science did march on a little in the course of the series, it must be said. Those quotes were from a story originally published in 1939; it and the other books near it also consistently used the word "computer" to mean a person who makes computations. But in a 1947 sequel there's a reference to "the big computer at Ultra Prime" where context makes it clear it's an (electronic?) device of some sort.
13:12:49 <ais523> thinking about it, technology that is quick to invent doesn't necessarily correlate with technology that people imagine would be useful
13:13:08 <ais523> a good example of a science-fiction technology that actually did turn out to be possible, and get invented, is television
13:13:45 <ais523> even before it was invented, it was quite common for science-fiction stories to have video-based communication devices and I think the word "television" had already been coined before the invention itself was created
13:16:19 <int-e> tele-vision is an old magic trope too, add film and telegraph and it's not that much of a stretch?
13:22:25 <esolangs> [[User:RaiseAfloppaFan3925]] https://esolangs.org/w/index.php?diff=167986&oldid=167659 * RaiseAfloppaFan3925 * (+983) Added Esolang Quality Rating System ratings
13:23:13 <int-e> fizzie: hmm isn't there some science fiction that has space travel and then on board communication uses pneumatic tubes :)
13:24:30 -!- amby has joined.
13:48:40 -!- perlbot has quit (Quit: ZNC 1.9.1+deb2+b3 - https://znc.in).
13:48:40 -!- simcop2387 has quit (Quit: ZNC 1.9.1+deb2+b3 - https://znc.in).
13:59:13 <esolangs> [[Thueue]] N https://esolangs.org/w/index.php?oldid=167987 * Yayimhere2(school) * (+1517) Created page with "'''Thueue''' is a [[dequeue]] based [[Thue]] style esolang. It can too do replacements, however only at the start and end of the string, and only on individual elements. == Semantics == Thueue uses the following syntax: ''D'' 1 [''x'']$ [''y'']$ < 2 [''x'']
13:59:34 <esolangs> [[Thueue]] https://esolangs.org/w/index.php?diff=167988&oldid=167987 * Yayimhere2(school) * (-2)
14:02:38 <esolangs> [[Thueue]] https://esolangs.org/w/index.php?diff=167989&oldid=167988 * Yayimhere2(school) * (+23) /* Semantics */
14:03:56 <esolangs> [[Thueue]] https://esolangs.org/w/index.php?diff=167990&oldid=167989 * Yayimhere2(school) * (-23) /* Semantics */
14:09:33 <esolangs> [[Talk:Bitwise Cyclic Tag]] https://esolangs.org/w/index.php?diff=167991&oldid=122871 * Yayimhere2(school) * (+156) /* Turing-completeness of BCT without arbitrary memory string as input? */
14:14:02 <esolangs> [[User:Yayimhere]] https://esolangs.org/w/index.php?diff=167992&oldid=167870 * Yayimhere2(school) * (+13) /* esolangs */
14:14:41 <esolangs> [[Thueue]] https://esolangs.org/w/index.php?diff=167993&oldid=167990 * Yayimhere2(school) * (+0) /* Semantics */
14:15:28 -!- pool has quit (Read error: Connection reset by peer).
14:15:49 -!- pool has joined.
14:18:15 <esolangs> [[Talk:Bitwise Cyclic Tag]] https://esolangs.org/w/index.php?diff=167994&oldid=167991 * Ais523 * (+306) /* Turing-completeness of BCT without arbitrary memory string as input? */ r to Yayimhere: you can just pad out the productions to ignore the extra digits
14:20:44 -!- pr1sm has joined.
14:40:34 -!- chiselfuse has quit (Remote host closed the connection).
14:40:38 <fizzie> I feel like one Lensman book had a speaking tube (not pneumatic, just your regular old tube), but I couldn't find it easily.
14:40:50 -!- chiselfuse has joined.
14:41:08 <fizzie> They talk a lot about tubes because there's also multiple plot-relevant "hyperspatial tubes" (read: wormholes).
14:41:46 <ais523> fizzie: well if you think about the world before the telephone is invented, it's unclear whether working electronic sound transmission or hyperspatial wormholes will be invented first
14:41:56 <ais523> you can probably imagine both of them and have no idea how to implement either
14:43:30 <fizzie> Fair, though this isn't a pre-telephone story. They do have ("ultra-wave") radios for at least ship-to-ship communication.
14:46:10 <esolangs> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=167995&oldid=167939 * Python yyds * (+143)
14:46:27 <esolangs> [[User:Python yyds]] N https://esolangs.org/w/index.php?oldid=167996 * Python yyds * (+212) Created page with "==Introductions== Python_yyds Hi world,I'm Python_yyds! I think esolang is interesting so I join it.~~~~"
14:47:02 <esolangs> [[CContains]] https://esolangs.org/w/index.php?diff=167997&oldid=150307 * Yayimhere2(school) * (-20) Changed an incorrect title into a lowercase
14:47:08 <fizzie> The hero "donned his headset" to ask for reports from different parts of the ship, so I think they're using something telephone-like also for intra-ship comms.
14:48:40 <esolangs> [[User:Python yyds]] https://esolangs.org/w/index.php?diff=167998&oldid=167996 * Python yyds * (+69) /* Introductions */
14:48:55 -!- amby has quit (Ping timeout: 240 seconds).
14:49:53 <int-e> . o O ( we'll sooner have flying cars than ubiquitous transcontinental real-time communication )
14:50:34 <ais523> int-e: ooh, that's a tough one
14:52:02 -!- amby has joined.
15:29:10 <avih> korvo: did you write https://esolangs.org/wiki/Algebraic_Brainfuck ? (i followed it from your RPython code)
15:41:46 <avih> korvo: if yes, then i've reached "As a monoid", which i didn't yet read, but i already have two questions: 1. {scalemove,move}2 should probably be a specific subset of a more generic thing with N offsets and M values. I don't know whether that's a limitation of the listing syntax. and 2. i don't get omega (what it's good for). i can tell it's either no-op or an infinite loop, depending on whether k is 0 or not, respectively, but not why it's on this list.
15:43:22 <avih> (if that's explained later, just let me know. i'll get to it eventually)
15:45:39 <esolangs> [[Special:Log/newusers]] create * Cyclic Tag * New user account
15:46:33 <avih> or maybe omega is on that list because otherwise there are some bf programs which cannot be represented with only the other items on that list? like an observable side effect that the program never stops?
15:46:52 <esolangs> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=167999&oldid=167995 * Cyclic Tag * (+301)
15:47:12 <int-e> avih: it's a nonterminating piece of code, and the name is probably inspired by Combinatory Logic
15:47:35 <int-e> you could have rules like omega(k) x = omega(k) but nothing like this appears to be done there
15:47:44 <int-e> (for k != 0)
15:47:55 <avih> int-e: it's not necessarily non terminating, but it can be, yes. however that's not what i asked.
15:48:09 <avih> i asked why it's on this list.
15:48:52 <int-e> it's an absorbing element for a ton of stuff in the monoid
15:49:00 <int-e> but the page doesn't do anything with that (yet?)
15:49:29 <avih> ok, then i'm again bumping into my monoid no-knowledge issue. thanks.
15:51:31 <avih> int-e: and do you know why {scalemove,move}2 exist explicitly on that list, and not implicitly as a subset of a more generic N pairs of (offset,value)?
15:51:59 <avih> is that a limitation of the syntax used there?
15:53:21 -!- DOS_User_webchat has joined.
15:53:28 -!- DOS_User_webchat has changed hostmask to ~DOS_User_@user/DOS-User:11249.
15:54:06 <avih> or is it because scalemove,move and scalemove2,move2 together are the building blocks which allow expressing any set of N (offset,value) pairs?
15:55:15 <int-e> I don't know. I guess the syntax is limited to a fixed number of arguments and http://www.cs.tufts.edu/~couch/bfmacro/bfmacro/ stops at move2()
15:55:55 <avih> (but then move is a subset of scalemove, so it could be removed too... i don't think i get what this list is supposed to represent, other than some list of possible macro definitions, not necessarily a minimal one)
15:56:11 <int-e> anyway, yeah, if you want the author's thoughts you'll have to wait for korvo to reply
15:56:32 <avih> k, thx
15:57:23 <int-e> (or, if idling on IRC isn't your thing, try the talk page(s))
15:58:06 <avih> the bouncer works for me ;)
15:58:27 <avih> it's not like i'm sitting and waiting for a reply....
16:00:15 -!- DOS_User_webchat has quit (Ping timeout: 250 seconds).
16:01:25 -!- DOS_User_webchat has joined.
16:01:29 -!- DOS_User_webchat has changed hostmask to ~DOS_User_@user/DOS-User:11249.
16:03:52 <avih> korvo: and somewhat regardless, does your code (which i didn't try to follow yet) merely represent the program in some immediate translation to this or a similar form, with the performance coming from it being executed efficiently? or does it also include some additional (post?)processing steps (optimizations) which aim to simplify the expression into fewer expressions, which translates immediately to shorter execution time?
16:05:00 <avih> (like identifying redundancies and removing them from the expressions)
16:08:50 -!- DOS_User_webchat has quit (Remote host closed the connection).
16:15:07 <avih> and finally, assuming it does apply known equivalence rules (your program), does it end up in some optimal list? or is it only optimal for some subset of knowledge which can theoretically exist about a bf program?
16:17:57 <avih> fwiw, while my interpreter is reasonably quick in some cases, i want to remove any specific "macros", and base it purely on observable side effects, hopefully ending up in some optimal execution list (wishful thinking).
16:18:18 <ais523> I can answer this one, algebraically optimising BF will not necessarily reach a theoretically optimal point because optimally optimising a program is undecidable
16:18:37 <int-e> avih: anyway, "monoid" here is just a fancy term to capture the concatenation operation of programs with basic properites: there's a unit e (the empty program) with ex = xe = x for all programs x, and concatenation is associative, (xy)z = x(yz)
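Read concretely, the monoid int-e describes can be modelled as plain string concatenation of program fragments; a trivial check of the stated laws (the fragment values are arbitrary):

```python
e = ""                       # unit: the empty program
x, y, z = "+>", "[-]", "<"   # arbitrary Brainfuck fragments

assert e + x == x and x + e == x       # unit laws: ex = xe = x
assert (x + y) + z == x + (y + z)      # associativity: (xy)z = x(yz)
print("monoid laws hold for concatenation")
```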
16:18:56 <ais523> like, you can imagine a program that searches for counterexamples to the Goldbach conjecture, that probably optimises into an infinite loop but good luck proving it
16:19:02 <avih> int-e: yeah, i got that. i just read the monoid subsection.
16:20:15 <avih> ais523: not familiar with the specifics, but yeah, i think i get the point, at least roughly.
16:20:19 <int-e> the definitions on the page seem to have mixed purposes... some are useful for translating into brainfuck; some others are useful for analyzing, optimizing, standardizing, and maybe compiling brainfuck
16:21:07 <avih> int-e: my point exactly. i don't get what it's supposed to represent other than some set of macros.
16:21:24 <esolangs> [[Special:Log/newusers]] create * Losbos * New user account
16:23:58 <avih> i hoped that it would represent some minimal set which is able to represent any bf program, and which is easy to optimize. but that minimal set has to be plus(int),right(int),loop(code),input,output
16:25:54 <avih> (there are likely other sets too, but this is the most obvious)
16:27:44 -!- Lord_of_Life_ has joined.
16:28:29 -!- Lord_of_Life has quit (Ping timeout: 256 seconds).
16:30:34 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
16:31:17 <avih> ais523: re not possible in general, i think it should be possible to have some theoretical optimal result under some known constraints for the compiler, such as the complexity of processing the input before starting execution. does that feel about right?
16:32:20 <ais523> avih: hmm, that's interesting, "best available optimisations given a given computational class for the compiler"
16:32:38 <ais523> but I think it's still undecidable, any valid optimisation could be special-cased so you can do it in O(1)
16:32:49 <avih> like, if you say the compiler is only allowed to run in O(N), then there must be some theoretical result of the most it can do with the program. same for O(N log N), etc
16:33:22 <ais523> some real compilers do have some special cases like that, e.g. clang recognises a common way to write the count-set-bits function and optimises it into the processor intrinsic
16:33:49 <ais523> it'd take a lot of effort to prove that optimisation correct, but clang doesn't have to because it just matches the pattern directly, so it doesn't slow the compiler down at all
16:34:00 <ais523> (at least in a computational-complexity sense)
16:36:15 <avih> sure. pattern matching is one way to do that, but for that you need to collect a list of patterns. my approach is to ignore patterns, or at least only consider known basic equivalences, and on top of that build the execution list based on observable side effects
16:37:19 <ais523> avih: it would be hard to come up with a theorem about this because it is not well-defined
16:37:39 <avih> what's not well defined? observable side effects?
16:37:53 <ais523> "known basic equivalences"
16:38:08 <avih> indeed. i hoped that would not be the case
16:38:10 <avih> :)
16:40:21 <avih> currently my known equivalence is that a loop which effectively doesn't move the head and modifies the head data by 1 is a counter.
16:40:46 <ais523> oh, no, that one isn't actually true
16:40:53 <ais523> because the head cell can be modified in an inner loop
16:41:05 <ais523> I guess it's true if you don't allow inner loops to modify the looped-on cell either
16:41:13 <ais523> but, it won't necessarily be true that every iteration will do the same thing
16:41:57 <avih> that's included in "doesn't move the head". obviously it's recursive for internal loops too. if an internal loop moves the head by an unknown amount then it cannot be considered "not moving the head"
16:43:04 <ais523> avih: no, I mean things like [->[-<+>]<]
16:43:19 <avih> it comes down to how much can you know about the state, and that's limited by the constraints you put on the compiler in terms of computational class
16:44:00 <avih> sec, let me get what it does
16:44:37 <ais523> (admittedly this specific example isn't useful, more complicated versions can be TC though)
16:46:09 <avih> ais523: it cannot be said that it changes the counter by 1, at least not in some way which i can tell trivially
16:46:48 <ais523> avih: right – the structure is "counter loop containing a balanced loop" but the inner balanced loop overwrites the counter
16:46:55 <avih> it has some interaction of the outer head value with the inner loop. if you could conclude that it's 0 (which you can't), then it would have been a counter.
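A deliberately conservative checker along these lines (all names hypothetical): it gives up on any inner loop, since, as ais523's [->[-<+>]<] example shows, an inner loop can overwrite the counter cell even when all the head movement balances out.

```python
def counter_loop_delta(body):
    """Return the net change to the looped-on cell per iteration of a
    Brainfuck loop body, or None if we cannot be sure.

    Conservative on purpose: any inner loop, or a head that doesn't
    return to the counter cell, makes the analysis give up.
    """
    pos, delta = 0, 0
    for op in body:
        if op == '>':
            pos += 1
        elif op == '<':
            pos -= 1
        elif op == '+':
            if pos == 0:
                delta += 1
        elif op == '-':
            if pos == 0:
                delta -= 1
        elif op == '[':
            return None        # inner loop: effect on cell 0 unknown
    if pos != 0:
        return None            # head doesn't end back on the counter
    return delta

print(counter_loop_delta('-'))          # -1: [-] is a plain counter loop
print(counter_loop_delta('->+<'))       # -1: a move loop, still a counter
print(counter_loop_delta('->[-<+>]<'))  # None: inner loop may clobber it
```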
16:48:54 <esolangs> [[One-Instruction Cyclic Tag]] M https://esolangs.org/w/index.php?diff=168000&oldid=167980 * Aadenboy * (-12)
16:49:32 <avih> anyway, the idea is to collect knowledge as the program is processed, and maintain some known state at any point in time (this might include actual known values, but also knowledge that some cells have unknown values), and base the execution list creation on top of that
16:50:40 <avih> and a loop which ends up moving head by an unknown amount means we now know nothing, and throw away all our existing knowledge.
16:53:00 <avih> (but sometimes we can know that amount, for instance if we know the loop moves the head left by 1 on each iteration (until the head cell has value 0), and we also know that the previous cell has value 0, in which case we don't have to drop our knowledge.)
16:54:55 <avih> (and we also know it's not actually a loop, but at most an "if")
16:56:46 <avih> the interesting parts are 1. how to represent the knowledge and with which limitations. 2. how to produce the execution list using this knowledge. and i already have ideas for both.
16:59:58 <avih> in a nutshell, the knowledge is respectively, the last known instruction per cell (which we may or may not have), and producing the list means we have to "flush" the knowledge as instructions as soon as we have to lose the knowledge about a cell - and if some other cell depends on the current value of this cell. example of when we have to drop the knowledge about cell X is when there's input into cell X.
17:02:12 <avih> so before we store this new knowledge (that X is now unknown input), we have to "flush" (produce instructions which set the cell with our existing knowledge - and everything else which depends on the value of this cell).
17:03:25 <avih> and if each instruction can depend only on one other cell (and some constants), then it _should_ be a relatively simple graph.
17:04:10 <avih> hopefully maintainable in no more than O(N log N), but not sure about that yet.
17:06:51 <korvo> avih: To answer your biggest question: the "scale" and "move" terminology comes from bfmacro. The algebraic rules for those ops come from common lore; I collected them from various Brainfuck compilers and interpreters.
17:06:53 <avih> an example of what such a framework should be able to produce, for instance, is to translate the classic hello world program into a series of putchar instructions (each with an immediate value)
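A toy sketch of that idea for the easy fragment of the language (loop-free, input-free code only; all names invented): track every cell's statically known value and emit only putchar instructions with immediate operands.

```python
def compile_straightline(code):
    """Compile loop-free, input-free Brainfuck (+ - > < .) into a list
    of ('putchar', value) instructions by tracking every cell's known
    value.  All the + - > < bookkeeping happens at compile time."""
    known, out, pos = {}, [], 0    # cells start at 0 and stay fully known
    for op in code:
        if op == '>':
            pos += 1
        elif op == '<':
            pos -= 1
        elif op == '+':
            known[pos] = (known.get(pos, 0) + 1) % 256
        elif op == '-':
            known[pos] = (known.get(pos, 0) - 1) % 256
        elif op == '.':
            out.append(('putchar', known.get(pos, 0)))  # immediate value
    return out

# 72 increments then output: the whole prefix folds into one instruction
print(compile_straightline('+' * 72 + '.'))   # [('putchar', 72)]
```

Handling loops and input is where the flushing described above comes in; this fragment never has to forget anything.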
17:07:52 <avih> korvo: IOW, they're some patterns which bf interpreters apply?
17:08:36 <korvo> avih: Also, yes, there is an optimal TC fragment of the monoid given by loops over pointer-propagating terms. I don't know what its rank is; for a [[monoid]], its *rank* is the minimum number of letters in any equivalent syntactic monoid. (Intuitively: the smallest alphabet that still faithfully represents every program.)
17:08:46 <avih> "TC" ?
17:08:54 <korvo> avih: Yeah. That's not an original page at all; it's 100% collected lore.
17:09:08 <avih> gotcha. what's "TC"?
17:09:34 <korvo> Oh, sorry. Turing-completeness. For a monoid, Turing-completeness usually means that it's not possible to decide whether an arbitrary term is in normal form.
17:09:48 <avih> right
17:09:57 <avih> (why didn't i think of that?!)
17:10:01 <avih> (TC)
17:10:21 <korvo> (It's okay, I'm prone to overusing jargon. I don't mind being asked to rephrase or simplify.)
17:10:54 <avih> it's ok, i don't mind asking if i don't get something, or at least declaring that :)
17:11:26 <korvo> avih: Have you encountered the concept of *abstract interpretation* yet? It fits cleanly into the monoidal way of doing things and it formalizes your idea about having partial knowledge of the current runtime state.
17:11:44 <avih> no
17:12:03 <avih> but it makes sense that i'm not the first to think about such model :)
17:12:55 <korvo> Your idea makes plenty of sense. The idea is that a concrete interpreter computes *exactly* the right answer, but could spend unbounded time doing that; an abstract interpreter only *approximates* the answer, but never visits any part of a program more than a fixed number of times (often just one visit!)
17:13:43 <avih> ideally just once, but i don't know whether that's possible.
17:14:12 <avih> or rather, whether O(N) is enough for whatever definition of ideal.
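korvo's concrete-vs-abstract distinction is easiest to see with a toy abstract domain. The classic textbook example (not Brainfuck-specific, and invented here for illustration) tracks only the parity of a value, visiting each statement exactly once:

```python
# Abstract values: we remember only parity, never the actual number.
EVEN, ODD, TOP = "even", "odd", "top"   # TOP = "could be either"

def add(a, b):
    if TOP in (a, b):
        return TOP
    return EVEN if a == b else ODD      # even+even=even, even+odd=odd, ...

def join(a, b):
    return a if a == b else TOP         # merge knowledge from two branches

# program: x = 2; if <unknown input>: x += 3 else: x += 5
x = EVEN                                # 2 is even
then_branch = add(x, ODD)               # x + 3
else_branch = add(x, ODD)               # x + 5
x = join(then_branch, else_branch)
print(x)  # -> odd (both branches agree, and we never ran the program)
```

The result is only an approximation (`TOP` means "I give up on this value"), but it is computed in one pass regardless of how the concrete program would behave.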
17:14:14 <korvo> My bf.py RPython interpreter uses abstract interpretation to do the optimization. It takes input expressions and splits them up into primitive pieces of the monoid, and then reassembles them step-by-step. TBH this is *not* the only way to do it, but it is very neat because part of the approximated value is the optimized program!
17:15:43 <avih> it does sound very neat. mind showing me the list of instructions it produces, if at all, for this? https://0.vern.cc/b.txt
17:16:12 <korvo> Oh, there will always be programs that you can't idealize. Suppose we write out the Goldbach conjecture as a program: for every even number, look for a pair of primes that sum to it; if there's no such pair, halt. The optimal representation of that program is either something like +[] or the empty program, depending on whether the conjecture is true.
17:16:42 <avih> is your execution based on a list of instructions of some sort? like some IR of some machine model? or is the execution more implicit evaluation of something?
17:17:48 <avih> (2nd time "Goldbach conjecture" is mentioned. i really should check out what that is)
17:17:50 <korvo> https://bpa.st/GTNQ
17:18:43 <avih> hmm.. so basically it "simplifies" the bf code into other bf code, and then executes it bf instruction after the other?
17:19:01 <korvo> The abstract execution's based on the original monoid at the [[Algebraic Brainfuck]] page. I might switch over to the pointer-propagation monoid but that likely will require changing the entire frontend of the interpreter up to the parser.
17:19:41 <korvo> The concrete execution's based on chaining basic blocks of actions, where each action has access to the tape and pointer, can do I/O, and must return an updated pointer. I picked this representation because it's JIT-friendly.
17:21:02 <avih> i see. but it still follows the original flow of the code. for instance it _can_ know, if it tried, in O(N) probably, that it produces a fixed list of output values
17:22:12 <korvo> Oh! Yeah, the abstract interpreter doesn't currently simulate knowledge about the tape. There's a reason for that; algebraically, Brainfuck doesn't know what the tape's topology is like, so behavior which depends on that topology isn't safe to optimize around.
17:23:23 <korvo> *That* is because this codebase has a responsibility to generate *canonicalized* versions of programs; it doesn't try to outsmart the program author. This isn't like GCC or LLVM, which have lots of hacks in order to catch bad code written by users.
17:23:35 <avih> in my model, where the only pattern it cares about is a counter loop, it should be able to know all the cell values before they're printed, and therefore create a list of putchar statements with immediate values. the cell values themselves have nothing which depends on them and aren't an observable side effect, and therefore should disappear from the execution list completely
17:24:23 <avih> it does know the initial state.
17:24:32 <korvo> Is unrolling the loop going to shrink the overall size of the code? It's undecidable, actually! In your case, the loop is only representing an initialized tape, but if the loop does any I/O then it wouldn't be possible to examine.
17:24:41 <avih> that's enough to deduce the whole output in this specific case in one pass
17:25:25 <avih> it won't necessarily shrink the code size, but almost certainly reduce execution duration
17:26:08 <korvo> Sure. So, here's where I actually use bf.py: https://bbgauge.info/brainfuck.html These programs likely reduce to either +[] or [] depending on whether their underlying mathematical statements are true or false. We *don't know* the actual answers, but we can imagine how *hard* it would be to solve the problem by looking at how big the programs are.
17:26:15 <avih> especially if loops end up running more than once
17:27:50 <ais523> korvo: tangentially related to the BB gauge (although not directly relevant), so you might be interested in it: https://codegolf.stackexchange.com/questions/97004/does-the-code-terminate-nobody-knows
17:27:58 <avih> i think this entire statement is based on me grokking that page you linked?
17:28:58 <korvo> ais523: Yeah! I think I've got most of those documented here: https://bbgauge.info/problems.html
17:30:02 <korvo> avih: Probably. But the short version's not hard to appreciate: when we don't know how hard something is, but we do know what the associated computation looks like, then we can compare the computations instead of the original thing. The computations are like representations of the thing.
17:30:37 <avih> korvo: i do find it fascinating that your program is only second to Laurent's, because his code definitely unrolls multiplication loops, and then jit it. obviously i believe you, but i don't see how it can be...
17:31:46 <avih> korvo: yeah, i do roughly understand it and think so too.
17:31:47 <korvo> avih: RPython has a builtin unroller; every trace is unrolled and gets a prologue, because the traces can only be started at the top of while-loops.
17:33:08 <avih> interesting. so basically you say you merely crunch the bf program slightly, and then let RPython grok and optimize it?
17:33:49 <avih> akin to produce some c code from bf program, naively or near enough, and then let gcc do what it does?
17:34:51 <avih> fwiw, while gcc is very good at optimizing such naive translation to c, it's meaningfully slower than a translation which unrolls multiplication loops.
17:35:21 <korvo> Yes. But without using libgccjit or libllvm. Here's an example JIT trace from running mandel.b: https://bpa.st/VSJA
17:35:21 <avih> i.e. compiling a naive translation vs compiling a translation which unrolls mult loops
17:36:16 <avih> sure. hence "akin" :)
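The "naive" 1:1 bf-to-C translation being compared might look like this sketch (the function name is illustrative, and the 30,000 byte cells follow the original implementation's tape size):

```python
# Each BF instruction maps 1:1 to a C statement; the C compiler's optimizer is
# then left to rediscover idioms like [-] or [->+<] on its own.
C_FOR = {
    ">": "p++;", "<": "p--;",
    "+": "t[p]++;", "-": "t[p]--;",
    ".": "putchar(t[p]);", ",": "t[p] = getchar();",
    "[": "while (t[p]) {", "]": "}",
}

def bf_to_c(src):
    body = "\n".join("    " + C_FOR[c] for c in src if c in C_FOR)
    return ("#include <stdio.h>\n"
            "unsigned char t[30000];\n"
            "int main(void) {\n"
            "    int p = 0;\n"
            + body + "\n"
            "    return 0;\n"
            "}\n")

print(bf_to_c("++[->+<]."))
```

GCC does remarkably well on output like this, but as avih notes, a translator that first collapses multiplication loops still beats it.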
17:36:21 <korvo> Those merge points used to hold BF code; they would just print out the code like ">" or "+". But the IR no longer admits a nice pretty-printed format quickly, and jit_debug strings need to be constant for speed.
17:37:00 <avih> so where is this optimizer? NIH RPython stuff?
17:37:15 <korvo> Otherwise it's just SSA. Crucially, it's not *my* SSA; it's RPython's generated SSA that it built *while compiling* my interpreter. The SSA is roughly similar to what you'd get when debugging PyPy, and you can see that I'm really just using PyPy's environment variable to control the JIT.
17:37:39 <avih> (or s/NIH/own/ ;) )
17:37:52 <avih> "SSA"?
17:38:43 <korvo> https://rpython.readthedocs.io/en/latest/jit/optimizer.html is probably what you want to read. It's a very simple optimizer: one forward pass over an SSA IR, doing the standard Frances Allen optimizations like propagation and dead-code removal. Every trace is taken over *two* iterations of a loop, and that allows partial unrolling and a prologue just by comparing the iterations to see what changed.
17:39:00 <korvo> Static Single Assignment. A property of IRs where a name is only assigned once, and never mutated.
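A toy version of the one-forward-pass optimizer described above: constant propagation over an SSA-style IR, with dead constant definitions falling away. The triple format is invented for illustration and is far simpler than RPython's actual trace IR.

```python
# ir: list of (dest, op, args), each dest assigned exactly once (SSA).
def optimize(ir):
    consts, out = {}, []
    for dest, op, args in ir:
        if op == "const":
            consts[dest] = args[0]           # remember; emit nothing yet
        elif op == "add" and all(a in consts for a in args):
            consts[dest] = consts[args[0]] + consts[args[1]]  # fold
        else:
            # materialize any constant operands, keep the instruction
            resolved = [consts.get(a, a) for a in args]
            out.append((dest, op, resolved))
    return out

ir = [("v1", "const", [2]),
      ("v2", "const", [3]),
      ("v3", "add", ["v1", "v2"]),   # folded to 5
      ("v4", "print", ["v3"])]       # becomes print of the literal 5
print(optimize(ir))  # -> [('v4', 'print', [5])]
```

Because every name is assigned once, propagation needs no dataflow fixpoint: one forward pass suffices, which is what makes the SSA form so pleasant for trace optimizers.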
17:39:02 <avih> my question was where, not how ;)
17:39:20 <avih> but i get it, rpython has an optimizer of its own.
17:40:06 <korvo> Sorry, "where" seems loaded. Like, I know where to find https://github.com/pypy/pypy/blob/main/rpython/jit/metainterp/optimizeopt/unroll.py and I'm not going to hide it from you, but it's not a fun read.
17:40:08 <avih> thanks. i roughly get it, but not familiar with the subject and terminology in general.
17:40:36 <avih> (but i get that it represents some class of IRs which have some known capabilities)
17:40:58 <korvo> Oh! Okay, no worries. So, lucky 10000 for compiler engineers: most serious compiler backends use some sort of SSA. They have many many names for it, but the SSA property is really common. LLVM is a good example of a system that uses SSA.
17:41:24 <avih> s/capabilities/properties/
17:42:41 <avih> yeah, i thought that might be the case, though had zero specific knowledge of it till now (and even now only the name and a property which i get, but not yet "expanded" mentally)
17:42:55 <korvo> If you were to suffer through a university compilers course, you'd either be given SSA or CPS (continuation-passing style, mind-hurting, usually means the school teaches Haskell, Scheme, or OCaml) as your paradigm. Probably also *basic blocks*: a hunk of code that does some ALU operations and then (conditionally?) jumps to another block. These are not easy concepts but they are ubiquitous in modern compilers.
17:43:55 <avih> yeah, never got there myself :) i aborted my masters into the world of internet startups ;)
17:44:24 <avih> and never took compiler courses in undergrad either
17:45:16 <avih> was never into that really, but now, all those years later, bf brought me to it ;)
17:45:21 <korvo> I get it. I was poached out of undergrad by Google, and because I'd pivoted from a music degree, I missed out on a lot. I've been cramming lectures from Youtube and reading papers for years, but I'm sure that there's still little gaps in my knowledge.
17:46:19 <avih> whose knowledge doesn't have gaps? :)
17:47:07 <avih> but anyway, i do find it interesting, and hope i'll be able to implement the model of partial knowledge i described earlier. sounds cool to me, at least before the 1st line of code ;)
17:47:44 <avih> (i do have other bf code, but this is your standard patterns based)
17:48:23 <korvo> Yeah! Take your time; abstract interpretation is tough to understand, and I don't read French so I'm sure that I'm missing details in translation. But the overall approach that you're thinking of should work.
17:49:13 <avih> i think so too, the question is how hard it would be to turn it into reasonable code
17:49:52 <avih> and c might not be the best for that, but it's my current weapon of choice
17:50:15 <korvo> Just over the horizon, there's the field of *partial evaluation*. Given a Brainfuck program, and partial knowledge about the tape, we can produce a *residual* program that includes that knowledge and might be simpler or faster.
17:50:51 <avih> i get that
17:51:14 <korvo> This eventually leads to Futamura projections, which are what RPython actually does: we give RPython an interpreter and RPython gives us a compiler. There is a standard book on the topic, usually just called The Book, that you will want to read if you want to *hack* on any of these JIT toolkits: https://archive.org/details/springer_10.1007-3-540-47018-2
17:52:02 <korvo> Oh, yeah, C sucks. The main issue with C is that, since it's not memory-safe, you have to fight two battles; you have to make sure that your interpreter's behavior is correct and also you have to not mess up memory accesses.
17:52:04 <avih> it's another way to look at my model, but in some way also the exact way to look at it, because the instructions it produces don't necessarily align with its source. it only cares to produce the observable side effects. so things could be out of order, etc
17:52:42 <korvo> C also lacks for abstractions. Something like ML is almost inevitable, or at least ML-style modules: SML, OCaml, Haskell, etc.
17:53:46 <avih> not worried about memory access (as in, i rarely have such issues), but the challenge is making it do the work readably
17:54:37 <avih> i.e. write it in such a way that the abstraction is not buried at the low code levels, but rather out in the open
17:55:14 <korvo> The Book has a couple sections on partial evaluation of C, BTW. They prototyped a *self-applicable specializer*: a program written in C that can optimize C programs and produce residues in C. (The Book will also explain why specializers have exactly three associated languages.)
17:55:38 <avih> and of course, specify "the work" exactly first. currently i only have a mental model of what it does and how, but not yet the specifics
17:56:35 <avih> which book is that?
17:56:40 <avih> (or did i miss it?)
17:57:31 <korvo> "Partial Evaluation: Practice & Theory", particularly the free 1998 PDF from archive.org linked above.
17:57:50 <avih> the reason i like c is that it's probably the most portable language and relatively easy at that. i.e. you have a c compiler anywhere.
17:58:19 <avih> oh, i did miss that. thanks.
17:59:00 <korvo> I'd disagree with both of those, but I'm fairly biased; I've been part of multiple efforts to push memory-safe languages in the industry. C is a hack that arose because Unix was written for machines that didn't have enough memory to compile Fortran. C was what worked at the time.
17:59:12 <korvo> But sure, C's a decent enough prototyping language.
17:59:22 <avih> roughly the same with sh, btw, at the other end of the spectrum under some points of view :)
18:00:07 <avih> i find c and sh interestingly complementary, and together sort of complete.
18:00:48 <korvo> Well, yeah, shell's a terrible hack. But shell's also an example of a *toplevel*, a system prompt that is always shaped differently from the internal REPL because it has to support interactive actions. ML and Smalltalk traditions have them too.
18:01:34 <avih> yeah, though never used those. not even lisp et al.
18:02:04 <korvo> I'd like to link some advice from multiple language designers: https://lobste.rs/s/vs08dv/how_design_new_programming_language_from The topic is whether C is a good language for compilers, and we explain the nuances fairly well.
18:02:15 <avih> i mean, i know how it works etc, but never tried anything in it.
18:02:51 <korvo> (I'm the one in the green shirt. "...it implies that C is the preferred high-level prototyping language, which hasn't been true for decades and in fact might have never been true.")
18:02:55 <avih> korvo: "we" as in you're one of the authors?
18:02:59 <avih> :)
18:03:20 <avih> oh, i'm not making any such claims :)
18:03:54 <korvo> Yes. Andy C.'s another serious author; he works on Oils, a series of languages to replace shell. Drew is the author of Hare, a C competitor that I think is fundamentally flawed from the outset, and it's interesting to understand how Hare's limitations arise from his beliefs about C's practicality.
18:03:56 <avih> just stated why i liked it. i also like js, and tcl (to a degree), and probably more too.
18:04:55 <avih> I'm familiar with Oils, if that's the python thing which gets auto translated to c to produce a posix shell?
18:05:14 <korvo> c-cube is an OCaml maintainer. Danila F. teaches type theory and works on...Rocq, I think? S. Jamaan works on CHICKEN Scheme, related to Lisp and C.
18:06:13 <avih> i might have a look at some stage (at least skim it), but i don't expect to fully go over it (and the rest of the resources). but then, who knows...
18:06:57 <korvo> No worries. Language designers are used to having opinions about languages, but most folks don't really think about it other than what they need for the task at hand. Never forget: the average developer has 2.5yrs of experience and only knows either ECMAScript or Python.
18:07:09 <avih> i don't have a fixation for any specific language, but i'm more experienced with imperative ones
18:07:45 <avih> i like whatever i can use to get the job done.
18:09:19 <avih> and this tends to be things i have more experience with. not due to some inherent property of the language.
18:09:34 <korvo> I like whatever I can use to express the task succinctly and without weirdness. Short programs are good. Programs that could expose weirdness are bad. Weirdness is when a machine does things that are outside the intended programming model: https://en.wikipedia.org/wiki/Weird_machine
18:09:43 <avih> yup
18:10:26 <avih> (damn, 9s to respond, but i did go over all of it and did agree right away...)
18:11:33 <korvo> Oh, I'd have believed that you'd heard of this before. It was a big splash in security-research circles, and those of us into speedrunning had already known about the phenomenon for a while but didn't have a good name. Reference (1), Bratus.pdf, is a really good read.
18:12:24 <avih> was not aware of that splash. i just read it and i think the same
18:13:16 <avih> (i obviously did not visit the link, but i agree and get what it might have regardless)
18:14:08 <esolangs> [[EUCS]] https://esolangs.org/w/index.php?diff=168001&oldid=167726 * Hammy * (+231)
18:14:09 <korvo> https://langsec.org/papers/Bratus.pdf It's got Sassaman as a co-author! Might have been his final paper. Wonder if we can get people to call them "Sassaman machines" instead.
18:15:29 <int-e> Malbogoids
18:16:09 <esolangs> [[Turing Torment]] https://esolangs.org/w/index.php?diff=168002&oldid=167747 * Hammy * (+43) /* Commands? */
18:17:06 <int-e> (I do consider Malbolge to be a Weird Machine)
18:18:08 <esolangs> [[Turing Torment]] https://esolangs.org/w/index.php?diff=168003&oldid=168002 * Hammy * (+0) /* Examples, I guess? */
18:22:03 <avih> korvo: btw, do you think it's conceivable that rpython turns your output to the equivalent of printing a fixed string?
18:22:36 <korvo> avih: No, it's not structurally capable of doing that, I think.
18:22:40 <avih> (re https://bpa.st/GTNQ )
18:22:51 <ais523> int-e: because Malbolge doesn't have an intended programming model?
18:23:22 <int-e> ais523: yeah because it's basically designed to make looping programs humanly impossible to realize (and failed, but that took quite some time)
18:23:45 <ais523> I think a better viewpoint is that Malbolge *contains* a weird machine
18:23:56 <ais523> possibly more than once
18:24:02 <int-e> oh, maybe
18:24:22 <korvo> int-e, ais523: Taking this to its obvious conclusion, maybe the typical ISA is like a least-weird machine.
18:24:25 <ais523> the whole stable-instruction / stable-NOP thing seems like an intentionally constructed weird machine to help make programming in the language easier to understand
18:24:47 <avih> korvo: in such case, my (limited) experience with what gcc can do with various levels of bf-translated-to-c makes me think that it cannot be that fast. or at least i've not seen anything like it so far.
18:24:52 <ais523> I feel like freestart Malbolge is well within my programming capabilities at this point, but I have no idea how to get the program started
18:25:03 <korvo> Like, an instruction that is always encoded as a single constant byte and always performs an ALU op that can be written as a linear/affine whatever of register bits, only having local effects, is sort of a best case. (Or, as Zim would say, worst case?)
18:25:44 <ais523> korvo: of course, typical ISAs are often more complicated than that
18:26:03 <ais523> you kind-of need an extreme RISC to get something that's that conceptually simple
18:26:05 <korvo> avih: It works because of a Pareto phenomenon: most hot loops are small and the program spends most of its time in relatively little code. As long as the JIT does well on those hot loops, it does well on the whole program.
18:26:17 <ais523> also, transport-triggered architectures are conceptually even simpler
18:26:18 <avih> it would be interesting to see what rpython produces which then gets executed.
18:26:44 <ais523> (except for how you do indirect memory access, which is conceptually complicated in one of those, you might need an extra level of indirection)
18:27:05 <avih> korvo: yeah, i do get that. as i said gcc does an amazing job on naively translated bf to c
18:27:08 <korvo> ais523: Sure. I'm thinking that you could see the typical ISA as like Malbolge, except that it's got the simplest kind of Malbolge-style encryption, and the clearest kind of Malbolge-style arithmetic, and...
18:27:25 <korvo> Just to imagine a poset of weird machines over a single physical object, really.
18:27:27 <avih> so i guess it's in the same neighborhood of things
18:27:51 <ais523> korvo: hmm, so I feel like the difficulty of Malbolge is in how the instructions interact with the encryption
18:28:08 <int-e> ais523: Malbolge's "mistake" is that it has too many NOP-s, making NOP-cycles quite likely in a random permutation.
18:28:22 <ais523> the instruction set isn't, in isolation, that terrible in an absolute sense, but it is really verbose
18:28:24 <korvo> avih: Sure. In that specific case, GCC has a structural advantage; they implement "polytope" or "polyhedral" modeling for loops. It's the kind of thing that horribly increases compiler times but enables this highly-generalized loop-unrolling behavior that you're imagining.
18:28:40 <ais523> and cares a lot about constants that Malbolge source code can't easily express
18:29:07 <avih> gotcha
18:29:11 <int-e> I mean, I don't know whether these are there by design... but I find it easy enough to believe that they're accidents.
18:29:22 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=168004&oldid=167962 * NTMDev * (+1650)
18:29:27 <ais523> and this interacts with the encryption because the verbosity makes stable programs harder to write and the lack of expressing useful constants makes initialization harder (with initialization being the main reason Malbolge is difficult)
18:29:48 <ais523> most ISAs don't have this interplay between their semantics and their encoding
18:30:23 <ais523> int-e: well, there are plenty of other features of Malbolge which seem almost impossible that they're intentional
18:30:24 <avih> but i was not talking about this specific case. even in mandelbrot.b, gcc -O2 of a 1:1 naive bf-to-c translation is barely 2x-3x slower than a very good bf-to-c result
18:31:08 <int-e> ais523: there are also way too many sheep and poodles in the clouds :P
18:31:22 <ais523> like encrypting the instruction before a jump target, or the undefined behaviour (and usually segfault in practice) you get if you self-modify an instruction when D=C, or the infinite loop when trying to execute an instruction outside the ASCII range
18:31:29 <avih> but where it does differ, is hanoi.b. in that case, naive translation can result in an order of magnitude difference (or more, i didn't estimate, but it's a huge diff)
18:32:35 <korvo> avih: So, have you read the pointer-propagation section yet? I only recently saw this, but I think that this is the true path forward. It unifies pointer movement and tape increments into a single big behavior, which should also unify some idiom-detection and loop-detection.
18:32:54 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=168005&oldid=168004 * NTMDev * (-125)
18:32:56 <avih> i think i stopped just there. i did finish the monoid part.
18:33:13 <avih> but i'll read it.
18:33:18 <korvo> (Idiom detection is when we detect that a straight-line sequence of instructions, just ALU op after ALU op, can be turned into a smaller sequence of machine instructions. Loop detection is when we detect that a loop has a specific idiom inside.)
18:33:50 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=168006&oldid=168005 * NTMDev * (+187) /* Conditionals / If Statements */
18:34:52 <avih> korvo: from initial glancing, i feel that goes over my head. i don't think i'm there for that yet.
18:35:08 <esolangs> [[ASTLang]] https://esolangs.org/w/index.php?diff=168007&oldid=168006 * NTMDev * (+162)
18:35:35 <avih> it does feel like an interesting point of view, but i'm not anywhere near getting it
18:36:09 <korvo> avih: It wasn't obvious for me either. Their insight is that we can always absorb any of the four essential actions, + - < >, into this representation. The monoid is just mathematical fluff.
18:37:18 <korvo> Like, suppose you've got a propagated pointer: you've got the offset, the adjustment, and the list of written tape cells. To absorb + or -, just inc/dec the written tape cell at the adjustment. To absorb < or >, just inc/dec the adjustment itself.
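That absorption step could be sketched like this. It is a guess at the representation korvo describes: `adjustment` is the net pointer movement absorbed so far, `writes` maps offsets (relative to the starting pointer) to deltas; the real pointer-propagation monoid also tracks loop structure, omitted here.

```python
# State: (adjustment, writes). Absorbing + or - edits the written tape cell
# at `adjustment`; absorbing < or > edits the adjustment itself.
def absorb(state, c):
    adj, writes = state
    if c in "+-":
        writes = dict(writes)                    # keep states immutable
        writes[adj] = writes.get(adj, 0) + (1 if c == "+" else -1)
    elif c in "<>":
        adj += 1 if c == ">" else -1
    return (adj, writes)

state = (0, {})
for c in "+++>>-<":
    state = absorb(state, c)
print(state)  # -> (1, {0: 3, 2: -1})
```

The payoff is that any straight-line run of `+ - < >` collapses into one such state, regardless of the order the instructions appeared in.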
18:38:27 <korvo> (BTW, this conversation's the last thing keeping me at the computer. Let me know when you need a break so that I can take 30min to do something else.)
18:38:31 <avih> i mean, i get why a list of anything can represent any sequence of +-><, but not its significance
18:39:35 <avih> i feel like this is a point of view which can result in finding equivalences where otherwise they're not obvious, but not how
18:40:12 <korvo> It has to do with reducing the number of ways that a compiler represents an action. To detect an idiom or loop, the compiler needs to know every form of that idiom. [-] and [+] are equivalent. [->+<] and [>+<-] are equivalent. The number of forms quickly explodes; it's the main reason that I don't add more forms to bf.py.
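The form explosion korvo mentions is exactly what a net-effect analysis of the loop body avoids: equivalent spellings normalize to the same summary. A sketch (simple loops only, no nesting or I/O; scale loops such as [->++<] come out as a "move" with multiplier 2):

```python
# body: the BF text between '[' and ']' (no nested loops).
def classify_loop(body):
    deltas, pos = {}, 0
    for c in body:
        if c == ">": pos += 1
        elif c == "<": pos -= 1
        elif c == "+": deltas[pos] = deltas.get(pos, 0) + 1
        elif c == "-": deltas[pos] = deltas.get(pos, 0) - 1
        else: return None                # I/O, nesting, etc.: not a simple idiom
    if pos != 0 or deltas.get(0) not in (-1, 1):
        return None                      # pointer must return; cell 0 must step by 1
    targets = {k: v for k, v in deltas.items() if k != 0 and v}
    if not targets:
        return ("clear",)                # [-] and [+] both land here
    if deltas.get(0) == -1:
        return ("move", targets)         # e.g. [->+<] -> {1: 1}
    return None

print(classify_loop("-"))      # -> ('clear',)
print(classify_loop("->+<"))   # -> ('move', {1: 1})
print(classify_loop(">+<-"))   # -> ('move', {1: 1})
```

Note that [->+<] and [>+<-] produce the same summary without the compiler ever enumerating the syntactic variants.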
18:40:21 <avih> korvo: go :) thanks for your time. been interesting :)
18:41:07 <avih> yeah, that's the sense i get as well, but nothing of the specifics of that
18:41:46 <korvo> No worries. This is giving me good ideas for how to improve the article. I'll be back in a bit.
18:41:53 <int-e> korvo: so you assume a finite cell size
18:41:56 <avih> :)
18:42:04 <int-e> (and wrapping arithmetic)
18:51:09 <esolangs> [[HTMHell]] N https://esolangs.org/w/index.php?oldid=168008 * Hammy * (+1107) Created page with "HTMHell (shortened to HTMH) is a weird esolang by [[User:Hammy]] that looks like HTML ==Commands== {| class="wikitable" |+ HTMH commands. |- ! Command !! Meaning |- | <!DOCTYPE HTMHELL> || Must be at the start of the program, if it isn't then start from where it is, if it
19:27:59 <esolangs> [[]] https://esolangs.org/w/index.php?diff=168009&oldid=166992 * * (+321)
19:28:20 <esolangs> [[]] https://esolangs.org/w/index.php?diff=168010&oldid=168009 * * (+10) /* Truth Machine */
19:31:28 <b_jonas> ais523: https://xkcd.com/1425/ . int-e: “Would you call a friend from half across the world? / If you’ll let us have his name and town and state, / You shall see and hear your crackling question hurled / Across the arch of heaven while you wait.” from 1911 (spoiler: vg vf gnyxvat nobhg genafngynagvp enqvb gryrtenz. vg orpnzr ninvynoyr ng ebhtuyl gur fnzr gvzr nf gur haqrefrn pnoyrf. ab
19:31:34 <b_jonas> ybat-qvfgnapr gryrcubar lrg.)
19:40:53 <ais523> ugh, it took way too long to read that rot13 without a translator
20:01:05 -!- tromp has joined.
20:29:14 <b_jonas> "the average developer has 2.5yrs of experience" => what? how can that be true? do they leave to management after five years in average or something?
20:29:37 <b_jonas> or become rich in 5 years in average and retire forever?
20:29:38 <ais523> b_jonas: partly people dropping out, and partly that the rate of new programmers entering is increasing
20:29:48 <b_jonas> oh true
20:29:54 <ais523> so they form a disproportionate portion of the sample
20:30:53 <b_jonas> "way too long to read that rot13" => that's partly my fault, I deliberately rewrote it to remove some potentially revealing punctuation or capitals so it is spoilered better
20:32:58 <korvo> int-e: Yeah, precisely. I don't mind that particular choice so much, but it's still a choice.
20:33:59 <ais523> oddly I find that bignum BF is often easier to implement in esolangs
20:34:16 <ais523> wrapping at 256 can be quite difficult to implement because it's a large constant
20:34:20 <korvo> b_jonas: Exponential growth is really unintuitive when it comes to snapshots. Regardless of where we stand, it looks like the past is nearly flat and the future is growing unbelievably fast. But the median time that an individual has spent in the field is shrinking because individuals enter the field so quickly.
20:34:43 <ais523> and often your native way to represent numbers is either naturally unbounded, or not big enough to represent 256 so you have to use bignum-like tricks anyway
20:35:25 -!- pr1sm has quit (Remote host closed the connection).
20:35:43 <korvo> I suppose one consequence of that is that, if you just maintain the relationships you currently have, your cohort is going to be an exponentially-small corner of the ecosystem. In order to stay connected, one must endure a constant stream of young folks coming in with half-baked ideas.
20:41:55 <korvo> b_jonas: That said, if we take a slightly narrower cohort, of developers that are actively participating in Stack Overflow, then the median jumps to about 15yrs of experience: https://survey.stackoverflow.co/2025/developers#education-experience-years-code This tells me something about SO's users compared to the median developer, but it's not surprising that SO users have more experience than average.
20:42:36 <b_jonas> korvo: that might be because people lie about how much experience they have, especially on SO, to get jobs
20:43:04 <korvo> ...There's also the possibility of a particular stats bias, *survivorship bias*, since people tend to wash out or burn out of software engineering.
20:44:06 <ais523> stack overflow is mostly a ghost town nowadays (which oddly has made it a much better website)
20:44:07 <int-e> korvo: huh, is it really unsurprising? one would think that as your experience grows you get increasingly less out of a platform like that
20:44:14 <korvo> Like, the typical tenure of a Google SRE is about 2yrs; the reason for that low number is because Google tends to run their SREs hot and burn them out. I lasted slightly longer than average.
20:44:21 <ais523> like, it's better to use than it was 5-10 years ago even despite the company in charge messing the site up
20:44:39 <b_jonas> korvo: sure, for Google it's believable
20:46:31 <korvo> int-e: I see a lot of old-timers giving extremely dated advice on the periphery of Stack Exchange, and I think that there's a kind of greybeards' refuge. But I could certainly imagine that if we were to sample users that don't use the rest of Stack Exchange then maybe they'd be younger.
20:47:02 <int-e> Like IRC then, I guess. Except IRC isn't so grossly gamified.
20:47:48 <int-e> (as far as old people go :P)
20:48:46 <ais523> I dislike the gamification on SO both for its own sake, and because the incentives are wrong
20:48:53 <korvo> Sure. The ancient Greek polis system had a "gerousia", basically an "old men's club", where the old men would sit around and talk about the old days and tell war stories and try to make a good impression on the youth. Not a bad idea in general but *terrible* if used as part of government, as in Sparta.
20:48:57 <ais523> the most useful actions are not the actions that give the most points
21:56:15 <esolangs> [[User:Buckets]] M https://esolangs.org/w/index.php?diff=168011&oldid=167951 * Buckets * (+11)
21:56:47 <esolangs> [[Language list]] M https://esolangs.org/w/index.php?diff=168012&oldid=167952 * Buckets * (+12)
21:57:29 <esolangs> [[Wycq]] N https://esolangs.org/w/index.php?oldid=168013 * Buckets * (+311) Created page with "Wyq is an Esoteric programming language created by [[User:Buckets]] In 2021. {| class="wikitable" |- ! Command Example !! Instruction |- | Wyq A B || Memory[A] = Memory[A] nand Memory[B] then, Goto address (Memory[B] - 1). |} [[Category:Unknown computational class]] [[Categ
21:58:05 <esolangs> [[Wycq]] M https://esolangs.org/w/index.php?diff=168014&oldid=168013 * Buckets * (+68)
21:58:40 <esolangs> [[Special:Log/move]] move * Buckets * moved [[Wycq]] to [[Wyq]]
22:01:32 <avih> i don't recall how i got here (is it linked from one of the posted links earlier?), but this and the followup are interesting https://blog.regehr.org/archives/140 it's about whether program termination is considered an observable side effect (spoiler: yes in java, no in c++, unclear in c)
22:05:15 <ais523> avih: my optimising interpreter for The Waterfall Model, if it recognises an infinite loop, compiles it into a deadlock
22:05:30 <ais523> err, not compiles, interprets it as
22:05:39 <avih> then in this case it's observable
22:05:52 <ais523> this seems to me like the sensible way to optimise infinite loops, I wonder why it isn't more common
22:06:15 <avih> but the general question is interesting, as well as the differences between languages and their effect on optimizations
22:07:15 <avih> i might know how i got there, probably while reading about compiler optimizations.
22:08:08 <avih> ais523: assuming it's not a side effect opens the door for further optimizations.
22:08:14 <ais523> I've been gradually trying to design an IR (not as a major project that's likely to finish any time soon, just as a curious spare-time thing) and realised that infinite loops are really hard to handle properly in an IR because they introduce what's effectively a side effect in a way that doesn't show up to most side-effect analyses
22:08:41 <avih> the ferme example is nice
22:08:47 <ais523> it'd make things much easier to say "if things calculated within an infinite loop don't affect the rest of the code, you can skip the loop"
22:08:51 <ais523> and C++ actually does that
22:09:34 <avih> yeah
22:09:38 <korvo> GHC Haskell prints out "<loop>" and exits if it ever finds a cycle in the part of the object graph that it is currently evaluating. The way it does this is very slick and not portable beyond GHC's VM, relying on Haskell's lazy semantics: lazy graph traversal can mark nodes when it first visits them, and prints "<loop>" when it visits a marked node.
22:11:10 <avih> korvo: this was mentioned at one of the comments. i think at the followup article
22:11:25 <ais523> this is allowed by the Haskell specification, which specifies that an abnormal termination and an infinite loop are considered the same outcome and a compiler can transform one to the other
22:11:28 <korvo> I don't really like non-termination. I would like for my computer to finish what it's doing, please.
22:12:39 <avih> sure, except that there are cases (and specific examples in these articles, including at the linux kernel) about use cases where not terminating is a desirable behavior
22:13:01 <avih> s/desirable/intended/
22:13:51 <ais523> there's another similar case which I find very annoying, in which a function call can contain a synchronisation barrier and expose data it clearly doesn't have access to
22:14:10 <avih> one example is firmware updater which enters infinite loop and expects the cpu watchdog to reboot
22:14:35 <ais523> let me try to find it
22:15:01 <avih> skim those articles. they have several examples
22:15:59 <korvo> avih: So, in terms of understanding the Haskell/ML way of thinking, there's an important old paper by the author of the Miranda language, "Total Functional Programming": https://ncatlab.org/ufias2012/files/turner.pdf This paper says that in addition to programs that always halt, we can write programs on *codata* which is infinite, and *never* halts.
22:16:32 <korvo> For all of the cases where we want to have an interactive machine that is constantly responding to a user, we can use codata. Maybe you disagree with this, but it's the meme that dons and others are implicitly bringing to the table.
22:17:32 <korvo> That said, I think that an infinite-loop intrinsic would be a fine compromise for a kernel or embedded app. I agree that sometimes an embedded device needs to enter a stuck state, and an intrinsic seems like a reasonable way to offer that control to the programmer without contaminating the language with *general recursion*.
22:17:38 <avih> i didn't express my opinion yet on this :) but without considering it deeply, i think (and thought also before reading these articles) that termination is an observable side effect
22:18:11 <avih> i did consider it while thinking about my bf optimization model
22:18:44 <zzo38> In the case of systems that have locking, any locks might be released if a program terminates but they might be held in an infinite loop otherwise
22:19:07 <avih> exactly
22:20:38 <avih> korvo: agreed, but c doesn't have an explicit halt instruction.
22:21:03 <ais523> found it: https://github.com/llvm/llvm-project/issues/64188
22:21:08 <korvo> avih: C lacks many of today's CPU's instructions. In general, it's really a language for a particular abstract machine rather than a language for the CPU.
22:21:25 <avih> for(;;); is such, but it's something the compiler deduces. not something the developer can express explicitly
22:21:51 <b_jonas> avih: I don't know how I got here (as in on this channel, as opposed to this IRC network) either
22:22:03 <ais523> this has been bothering me so much, it's making me think that acquire/release are the wrong abstractions for software (despite being correct for hardware)
22:22:16 <ais523> b_jonas: I kind-of assumed you followed me, but wasn't sure
22:22:37 <zzo38> (For this reason, as well as the case of knowing that a capability is no longer usable, in my hypothetical computer design, halting can be distinguished from infinite loops without I/O, although in some cases a program might halt without intending to, such as waiting for a condition that the kernel knows is impossible (e.g. for any one of an empty set of objects to be ready).)
22:22:43 <avih> b_jonas: to that article :)
22:25:03 <avih> anyway, i'm out for now. just wanted to drop that link because i found it interesting, and related to the earlier conversation in some way, and to this channel. later :)
22:26:16 <APic> cu
22:26:20 <korvo> Peace.
22:26:40 <APic>
22:26:48 <avih> .
22:27:03 <avih> (really tiny peace symbol)
22:27:09 <korvo> zzo38: Oh, that's interesting to think about. With E-style message delivery (so-called "E-order" of deliveries) there's always a possibility for deadlock simply because we can create a promise which resolves to itself.
22:28:00 <korvo> In Vixen, I suppose that the analogue would be some sort of omega-shaped execline script that constantly tries to invoke and wait for a copy of itself. Not gonna write that out because it sounds like a fork bomb.
22:28:05 * korvo lunch &
22:30:48 <b_jonas> ais523: the C++ rule that an infinite loop without side effects is undefined behavior bothers me. if I write a program that tries to find a counterexample for the Goldbach conjecture, and the C++ compiler is smart enough to prove that there's no such counterexample, then it can make the program do anything, such as lie that it found a specific counterexample, or spawn demons. and if I write the program
22:30:54 <b_jonas> such that it doesn't even print the counterexample, it only prints that it found one, then the compiler doesn't even have to prove that there's no counterexample, it can just make the program lie that it found a counterexample regardless. this gets worse if I write an interpreter in C++ for a toy language. the compiler can compile my interpreter such that it works most of the time but may UB for some
22:31:00 <b_jonas> infinite loops in the interpreted program. to avoid this I have to deliberately insert something that has a side effect (possibly only in some of the iterations, as long as that's infinitely many) into the loop body,
22:31:30 <ais523> b_jonas: this is basically what avih's link above was about
22:32:23 <b_jonas> and I'm not even sure what I should insert that is considered a side effect by the C++ standard but that the compiler can optimize away. does calling time() count as a side effect? normally that won't even issue an actual kernel system call, it's one of the system calls optimized by magic. how about select(0,0,0,0)? is the compiler allowed to know that it has no side effect and so not count it as an
22:32:29 <b_jonas> observable side effect? a volatile write counts as a side effect, but the compiler probably also isn't allowed to optimize away the write.
22:33:50 <b_jonas> if I want to definitely not terminate that's easy, I can just write while(1) { pause(); volatile int m = 0; m = 1; }
22:34:03 <b_jonas> but that's a kind of trivial case
22:35:05 <ais523> I think this problem is actually unfixable from within C++ – ideally you would want a volatile read that gets optimized out, but I'm not even sure that concept makes sense
22:35:53 <ais523> you could write a function call to a function whose behaviour isn't visible to the calling code, but that couldn't be optimized out and would be slower than a volatile read
22:36:45 <b_jonas> "c doesn't have an explicit halt instruction" => it doesn't, but unix has pause(). it's a funny one because this is basically the *only* reasonable use for the pause() system call.
22:37:26 <b_jonas> but even if you didn't have pause, which is probably a historical accident, you can use sigwait or select with a long timeout etc
22:40:55 <b_jonas> ais523: yes, a volatile read might work
22:46:42 <ais523> b_jonas: if you had a program that ran entirely in signal handlers
22:46:57 <ais523> you could make pause in a loop the body of main, after setting the signal handlers up
22:47:05 <ais523> (this is probably a bad idea but it would at least be reasonable)
22:48:37 <b_jonas> volatile has some other advantages: if you have a slow pure computation and want to roughly measure how slow it is, then you can write the results of the computation into a volatile to make sure the computation isn't just optimized away. also even if it's not optimized away completely, a volatile write is at least cheap.
22:48:43 -!- amby has quit (Ping timeout: 256 seconds).
22:49:10 <b_jonas> eg. it's cheaper than trying to do a system call to ensure a side effect or something like that
22:49:56 <ais523> not standard C++, but thinking about it an empty asm volatile would probably work
22:50:34 <b_jonas> also volatile writes in C and C++ are very portable
22:50:57 <b_jonas> ais523: yes, that's technically dependent on the CPU type and compiler, but probably works decently
22:51:20 <ais523> most asms permit the null string as a no-op
22:51:50 <ais523> although, the compiler might not necessarily be able to pass it to the assembler correctly
22:52:07 <b_jonas> yeah, but I'm not convinced that the compiler will know the asm syntax on all CPU architectures
22:52:32 <b_jonas> like if you're compiling for an uncommon architecture or with an uncommon compiler or compiler options
22:53:25 <b_jonas> and it's also harder to input a dependency into an inline asm portably so that it can't be reordered to before the value is computed
22:54:46 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
22:55:46 -!- amby has joined.
23:05:58 <zzo38> I did write a program once that used pause() because I wanted to wait for a signal to set a variable before the program does anything else
23:10:25 <ais523> zzo38: doesn't that go wrong if the signal arrives before the pause() call?
23:11:17 <ais523> you would need a ppause() instead to avoid the race condition (which exists, but for some reason is called sigsuspend() instead)
23:12:01 <zzo38> A while loop is used, so if a different signal arrives then it will pause() again
23:15:51 <ais523> zzo38: no, the other way round
23:16:09 <ais523> if the signal you're waiting for arrives before the program enters the pause() call
23:16:09 -!- pool has quit (Read error: Connection reset by peer).
23:16:19 <ais523> then pause() will be waiting for something that has already happened
23:16:55 <zzo38> O, that is a valid point, if it occurs between checking the loop condition and calling pause()
23:18:13 -!- pool has joined.
23:19:58 -!- Sgeo has joined.
23:23:37 -!- sftp_ has joined.
23:24:55 -!- sftp has quit (Ping timeout: 264 seconds).
23:24:56 -!- sftp_ has changed nick to sftp.
23:24:56 -!- sftp has changed hostmask to ~sftp@user/sftp.