00:22:50 -!- ajal has quit (Quit: so long suckers! i rev up my motorcylce and create a huge cloud of smoke. when the cloud dissipates im lying completely dead on the pavement).
01:05:55 -!- Sgeo__ has joined.
01:08:55 -!- Sgeo_ has quit (Ping timeout: 246 seconds).
01:55:29 <esolangs> [[Special:Log/delete]] delete * Ais523 * deleted "[[User:Syzygy]]": redirect left over after a page created in the wrong namespace was renamed to the correct namespace
01:56:05 <esolangs> [[Special:Log/move]] move * Ais523 * moved [[Esolang:Syzygy]] to [[Syzygy]]: history merge to Syzygy
01:56:05 <esolangs> [[Special:Log/delete]] delete * Ais523 * deleted "[[Syzygy]]": Deleted to make way for move from "[[Esolang:Syzygy]]"
01:56:26 <esolangs> [[Special:Log/delete]] restore * Ais523 * undeleted "[[Syzygy]]": part two of history merge
01:56:52 <esolangs> [[Syzygy]] https://esolangs.org/w/index.php?diff=166113&oldid=166112 * Ais523 * (+9931) set top revision after history merge
01:57:42 <esolangs> [[User talk:Ais523]] https://esolangs.org/w/index.php?diff=166114&oldid=166101 * Ais523 * (+246) /* Esolangs forums? */ it didn't work last time we tried it
02:02:13 -!- ais523 has joined.
02:03:53 <ais523> korvo: I think that Basic Stack is TC by using the register and top of stack as two counters
02:05:09 <ais523> the TC proof on the page is wrong I think, it's expecting "push reg" to push the reg-th stack element from the bottom (using it like a pick instruction which would be an easy way to get around the usual Turing-completeness issues for stack-based languages), but it actually pushes the register itself
02:06:53 <ais523> <korvo> Oh, I don't have permissions to remove the redirect in the main namespace. I guess that I will *not* be able to fix that, sorry. ← an incorrect move always needs admin help to fix if anything happens to the resulting redirect (including being edited)
02:07:28 <ais523> this is possibly a design flaw of MediaWiki – I know I have historically spent a lot of time fixing broken moves both on Esolang, and on Wikipedia when I was an admin there
02:08:46 <ais523> <FireFly> do we still have a bot we use for @tell functionality here? I forget ← I usually try to read the entire logs, although I probably miss lines occasionally
02:09:06 <ais523> conversation in #esolangs is often asynchronous nowadays, with people conversing through the logs
02:10:53 <ais523> <fizzie> But I see lambdabot's still here, and if I recall correctly, it could also pass on messages. ← I wonder whether libera has memoserv? that got used on Freenode on occasion
02:11:20 <ais523> [Whois] MemoServ is MemoServ@services.libera.chat (Memo Services)
02:11:31 <ais523> I imagine MemoServ messages may be easy to miss if you aren't expecting them, though
02:13:33 <ais523> <fizzie> You can in fact see the fnord in the code: https://github.com/fis/fungot/blob/master/fungot.b98#L157 ← that anchor annoys me, befunge really wants two-dimensional anchors, it isn't designed for one-dimensional anchors unless you program specifically to make it work
02:13:33 <fungot> ais523: mr president, ladies and gentlemen, i should like to thank the rapporteur on her diligence and her persistence during the many debates that have taken place in committee; that is why i regard this particular proposal but we will not neglect the interest of food safety, must be respected in any coordination process. it is not sufficient for the president of the republic of armenia, azerbaijan and georgia.
02:30:41 <korvo> ais523: No worries. Thanks for your patience with us.
02:40:58 <esolangs> [[Interbflang]] N https://esolangs.org/w/index.php?oldid=166115 * TheBigH * (+2191) Created article.
02:42:57 <esolangs> [[User:TheBigH]] M https://esolangs.org/w/index.php?diff=166116&oldid=165363 * TheBigH * (+170) Added interbflang.
02:53:37 <Sgeo__> Burroughs Algol 60 has an "IMP" relational operator (for implies).
02:54:26 <korvo> Nice. It's not common; the only language that comes to mind for me is Nix.
02:56:13 <ais523> x86alikes have ANDN which is the opposite of an implies
02:56:38 <ais523> although the argument order is very confusing: x ANDN y is "not x and y" which is the opposite of what you'd expect from the name
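The relationship between IMP and ANDN described above can be checked with a small bitwise sketch (Python used for illustration; `imp` and `andn` are hypothetical helper functions modelling the operations, not real instructions):

```python
def imp(x, y, width=8):
    """x IMP y: bitwise "not x or y" (material implication per bit)."""
    mask = (1 << width) - 1
    return ((~x & mask) | y) & mask

def andn(x, y, width=8):
    """x86-style ANDN: "not x and y", despite the name reading like "x and not y"."""
    mask = (1 << width) - 1
    return ~x & y & mask

# ANDN x y is the bitwise negation of (y IMP x), i.e. the "opposite of
# an implies" with the confusing argument order mentioned above:
for x in range(16):
    for y in range(16):
        assert andn(x, y, 4) == (~imp(y, x, 4)) & 0xF
```

The loop exercises all 4-bit operand pairs, so the identity holds bit-by-bit, matching the observation that ANDN is a negated implication with swapped operands.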
03:03:09 <Sgeo__> Was looking at https://bitsavers.org/pdf/burroughs/LargeSystems/B5000_5500_5700/5000-21001-D_An_Introduction_to_Algol_60_for_the_B5000_Information_Processing_System_196112.pdf and now watching https://www.youtube.com/watch?v=T-NTEc8Ag-I before I go back to reading it. It's starting to strike me how Algol influenced C and some BASIC dialects (returning values by assigning to the function's name)
03:27:38 <Sgeo__> Algol 60 switch statements are weird. IIUC they're targets for GO TO statements
03:30:51 <ais523> I think it benefits practical languages to have a return variable (i.e. something you can assign to in order to set the return value, possibly multiple times in the function/procedure/subroutine), *but* that its use should be optional and there should be return statements too as shorthand
03:31:20 <ais523> I commonly end up having to create return variables, so it would be nice to have a convention for them, but you don't always need one
03:34:38 <Sgeo__> So, which CPUs were designed with specific languages in mind? Burroughs mainframes for ALGOL, Lisp machines for Lisp, basically everything today for C
03:52:35 <b_jonas> Sgeo__: no, that's backwards. C was designed for the existing and near future CPUs, not the CPUs for C.
03:53:09 <Sgeo__> I kind of have the impression that no CPU design would be made today that isn't a good fit for C
03:54:02 <b_jonas> yes, but that's not because of C, it's to be able to run the programs that were designed to run on existing CPUs. I don't think C is relevant there.
03:55:06 <b_jonas> e.g. the CPUs have to support a flat memory space addressable in bytes, because existing programs assume a flat memory, and that is often baked so deeply into programs that it would be hard to change
04:10:06 <ais523> fwiw I think the concept of different types of memory (rather than a flat address space) is a useful one and can help make programs more secure and easier to reason about
04:10:12 <ais523> but existing segmented architectures might not fit it well
04:11:04 <ais523> I realised recently that it makes sense to have indexable and non-indexable allocations (in non-indexable allocations the only pointer arithmetic allowed is field projections), with each array in its own indexable allocation
04:11:50 <ais523> because if you don't do that, then most existing memory-safety retrofitters don't work properly because they don't prevent a buffer overflow that stays within the allocation and hits something that's stored in the same structure as the buffer
04:12:19 <ais523> (I suppose that in indexable allocations, pointer arithmetic should be limited to offsetting a multiple of the element size, plus field projections)
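The indexable/non-indexable split described above can be sketched as a toy model (Python for illustration; the class names and API are invented, and this is a model of the idea only, not any real memory-safety tool):

```python
# Non-indexable allocation: the only "pointer arithmetic" is field
# projection; there is deliberately no indexing operation at all.
class NonIndexable:
    def __init__(self, **fields):
        self._fields = fields

    def project(self, name):
        return self._fields[name]   # field projection: allowed

# Indexable allocation: offsets are whole elements only, and must stay
# within the allocation's bounds.
class Indexable:
    def __init__(self, elems):
        self._elems = list(elems)

    def index(self, i):
        if not 0 <= i < len(self._elems):
            raise IndexError("offset escapes the allocation")
        return self._elems[i]

# A struct holding a buffer keeps the buffer in its *own* indexable
# allocation, so an overflow within the buffer's bounds-check can never
# reach the neighbouring field:
record = NonIndexable(buf=Indexable([0] * 8), secret=42)
record.project("buf").index(3)     # fine
# record.project("buf").index(8)   # raises IndexError; can't hit `secret`
```

This captures the point about retrofitters: if the buffer lived inside the same allocation as `secret`, an in-allocation overflow would be invisible to bounds checks done at allocation granularity.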
04:14:11 <ais523> I also realised recently that a) the main technical problem in writing an efficient memory allocator nowadays is deallocating memory on a different thread from the one it was allocated on, b) programs usually don't need to actually do that
04:15:26 <ais523> so it would make sense to enforce that statically (in Rust you can do that by using a custom allocator that isn't Send) and that effectively gives you a different address space for each thread (they can read and write each other's spaces, but not allocate and deallocate)
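The static-enforcement idea above can be modelled dynamically in a few lines (a sketch only; `ThreadArena` is an invented name, and Python can only check the rule at runtime, whereas Rust's non-`Send` allocator would reject violations at compile time):

```python
import threading

# Toy per-thread arena: any thread may read or write cells, but
# alloc/free are restricted to the thread that created the arena.
class ThreadArena:
    def __init__(self):
        self._owner = threading.get_ident()
        self._cells = {}
        self._next = 0

    def _check_owner(self):
        if threading.get_ident() != self._owner:
            raise RuntimeError("alloc/free on a non-owning thread")

    def alloc(self, value=None):
        self._check_owner()
        handle = self._next
        self._next += 1
        self._cells[handle] = value
        return handle

    def free(self, handle):
        self._check_owner()
        del self._cells[handle]

    # reads and writes are allowed cross-thread
    def read(self, handle):
        return self._cells[handle]

    def write(self, handle, value):
        self._cells[handle] = value
```

With this discipline each thread effectively owns its own allocation space while still sharing the data, matching the "read and write each other's spaces, but not allocate and deallocate" split.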
04:23:22 <zzo38> BASIC also has "IMP" relational operator and the "return variable"
04:32:36 <Sgeo__> I feel like I've seen IMP on some BASICs, but I think only some BASICs have functions with return variables like that
04:32:50 <Sgeo__> There's a lot of variety in BASICs
04:34:17 <b_jonas> yes, and BASICs often have features or non-features that seem really weird to me
04:35:22 <Sgeo__> QBasic has both IMP and functions that return values by setting the name
04:36:04 <zzo38> Yes, different versions of BASIC are different, but I specifically mean Microsoft BASIC (although I think most of the implementations at one time were from Microsoft?)
04:37:32 <Sgeo__> Most of the implementations on microcomputers were from Microsoft. Mainframes and minicomputers had their own
04:38:07 <zzo38> I wrote http://esolangs.org/wiki/User:Zzo38/Programming_languages_with_unusual_features#BASIC but other things that you think are remarkable might also be mentioned (and/or the existing explanation changed)
04:38:25 <Sgeo__> And I think "Microsoft BASIC" is itself ambiguous. There's the version on early microcomputers, then QBasic and QuickBASIC are a lot more full featured
04:39:02 <zzo38> Yes, I think you are correct
04:41:20 <Sgeo__> Was going to post about ALGOL 60 but I don't fully understand its switch statement yet.
04:42:16 <Sgeo__> There's a construct that's common to languages older than a certain point and uncommon to languages after that point, that I think counts as unusual to modern eyes: Taking an integer and doing something based on a list in that statement
04:43:06 <ais523> Sgeo__: I think languages just became higher-level over time – there's an instruction that's very much like that in both JVM bytecode and LLVM IR
04:44:21 <ais523> so it's still commonly used as something to compile into, it's probably important for performance that an instruction like that exists – it's just too low-level to be ergonomic to use directly
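The kind of instruction being described (JVM `tableswitch`, LLVM's `switch`) amounts to indexing into a table of jump targets. A rough Python analogue, dispatching through a handler list (names are illustrative):

```python
# tableswitch-style dispatch: O(1) lookup into a dense handler table,
# with a default target for out-of-range values.
def make_dispatcher(handlers, default):
    def dispatch(n):
        if 0 <= n < len(handlers):
            return handlers[n]()
        return default()
    return dispatch

dispatch = make_dispatcher(
    [lambda: "zero", lambda: "one", lambda: "two"],
    default=lambda: "other",
)
```

As the discussion notes, this is ergonomic as a compilation target (a dense `switch` over small integers compiles to exactly this indexed jump) but too low-level to be the surface syntax in a modern language.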
04:56:05 <zzo38> There is often lower-level stuff other than assembly language that I will want to use but C does not do it. People have tried to make better programming languages than C but often make them worse in many ways.
05:35:31 <Sgeo__> Algol 60 uses "ENTIER" for what modern languages call "floor"
05:38:58 <Sgeo__> This book has a .. curious statement, trying to figure out if it's correct
05:40:54 <Sgeo__> Yeah, it is. Just unusually written
05:41:36 <Sgeo__> "it is useful to be aware of the relationship LOG_10 (X) = LOG_10(e) x LN (X)"
05:41:59 <Sgeo__> I'm more used to change of base being written as log_10(x) = ln(x) / ln(10)
05:58:11 <ais523> I think even floating-point divisions are slow
05:58:16 <ais523> even on modern hardware
05:58:41 <ais523> so a performance-minded programmer of the day would have preferred a formula that used multiplication to one that used a division
05:59:28 <ais523> (it isn't quite correct to constant-fold ln(x) / ln(10) into ln(x) × (1 / ln(10)) – modern compilers will do that with fast-math-like optimisations but not if compiling accurately)
06:00:00 <ais523> (so if you want the better performance you have to write the 1/ln(10) manually, which is log_10(e))
06:08:32 <Sgeo__> I wasn't previously aware that the reciprocal switches base and argument like that, although I think it makes sense with change of base
06:09:20 <Sgeo__> log_10(e) = ln(e)/ln(10) = 1/ln(10)
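Both forms of the change-of-base relationship, and the reciprocal identity connecting them, can be checked numerically:

```python
import math

x = 123.456
# the book's form: LOG_10(X) = LOG_10(e) * LN(X), a multiply instead of a divide
assert math.isclose(math.log10(x), math.log10(math.e) * math.log(x))
# the more familiar form: log_10(x) = ln(x) / ln(10)
assert math.isclose(math.log10(x), math.log(x) / math.log(10))
# and the constant linking them: log_10(e) = 1 / ln(10)
assert math.isclose(math.log10(math.e), 1 / math.log(10))
```

The first assertion is the formula the book recommends: the constant log_10(e) can be precomputed, turning the per-call division into a multiplication.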
06:13:17 -!- citrons has quit (Quit: Reconnecting).
06:13:26 -!- citrons has joined.
06:13:28 -!- citrons has quit (Client Quit).
06:13:38 -!- citrons has joined.
06:27:15 -!- Sgeo__ has quit (Read error: Connection reset by peer).
06:28:46 -!- ais523 has quit (Ping timeout: 246 seconds).
06:28:56 -!- ais523 has joined.
06:29:05 -!- tromp has joined.
06:36:33 <zzo38> Floating division is 40 cycles on MMIX, which is faster than integer division but slower than other operations with floating point numbers (other than square root, which is also slow).
06:57:21 -!- ais523 has quit (Ping timeout: 246 seconds).
06:57:27 -!- callforjudgement has joined.
07:16:10 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
08:03:29 -!- callforjudgement has changed nick to ais523.
08:44:33 <strerror> Right, there's a page for it in the MMIX document. (On a physical architecture, this would be much longer.)
09:03:31 <ais523> ironically modern processors don't have cycle counts in the traditional sense, due to all the out-of-order stuff going on
09:03:55 <ais523> the closest you can get is a minimum latency, but if you try to use it like a traditional cycle count you'll get completely the wrong result
09:48:20 <esolangs> [[Sigq]] https://esolangs.org/w/index.php?diff=166117&oldid=165586 * TheSpiderNinjas * (+60)
09:50:18 -!- tromp has joined.
10:35:41 <esolangs> [[User:NoWhy]] M https://esolangs.org/w/index.php?diff=166118&oldid=165938 * NoWhy * (+37) link to personal drafts
10:50:12 <strerror> The MMIX document already knows about pipelining: “we must remember that the actual running time might be quite sensitive to the ordering of instructions. For example, integer division might cost only one cycle if we can find 60 other things to do between the time we issue the command and the time we need the result …”
10:51:33 <strerror> And the meta-simulator can simulate “… such things as caches, virtual address translation, pipelining and simultaneous instruction issue, branch prediction, etc.” But not OOO execution.
10:55:43 <strerror> But OOO also causes problems, including security problems, and we might get rid of it eventually. I think GPU architectures still don't bother with it.
10:59:15 <ais523> it's unlikely to be dropped in CPUs any time soon – the last serious attempt to get rid of it almost destroyed Intel
10:59:54 <ais523> (and it isn't nearly as bad as speculative execution when it comes to security issues)
11:00:45 <ais523> today's compilers wouldn't work very well without OOO and yet they're pretty entrenched, so no big CPU manufacturer is likely to take a risk on trying to change their CPUs in a way that would invalidate all the existing compiler technology
11:03:06 <ais523> the other big advantage of OOO is that it allows commands to take variable lengths of time to run without losing most of the optimisation opportunity from pipelining them correctly
11:03:13 <b_jonas> "the last serious attempt to get rid of [out of order execution] almost destroyed Intel" => do you mean the I64 architecture or the low powered x86 cpus with the simpler pipeline?
11:03:34 <ais523> b_jonas: i64 – Pentium IV was earlier and Intel mostly survived it
11:04:40 <ais523> OOO seems unavoidable for systems that have hardware-managed caches to run at top speed – you'd have to explicitly do the cache management in software without it
11:05:04 <ais523> which GPU programs do actually do, but for CPU programs you'd have to change all the existing source, not just the compilers
11:06:34 <b_jonas> ais523: I mean Intel Atom, not pentium 4
11:07:08 <ais523> hmm, I'm not too familiar with the intentionally low-powered Intel processors
11:07:23 <ais523> I vaguely remember that later versions of the Atom added it back?
11:07:48 <ais523> in which case the attempt can be said to have failed
11:17:17 <b_jonas> I wonder, perhaps in a CPU architecture unlike x86, where you have lots of registers and so most instructions don't read or write the main memory with the cache hierarchy so there are separate memory read/write instructions, could you have something like x87 where you can split memory reads explicitly to two instructions, one that initiates the memory read and one that waits for it to complete and gives
11:17:23 <b_jonas> you access to the value read? then you could perhaps have no out of order execution other than that and maybe some similarly split slow multiplication/division/square root instructions
11:20:08 <ais523> b_jonas: I discussed a CPU design like that in here a while ago (probably years ago now)
11:20:21 <ais523> where the idea is that instructions state a time by which the result is needed
11:20:52 <ais523> (and this is used to automatically route the result to the correct instruction, because you say "this result is the input to the 10th-next instruction" or the like)
11:21:50 <ais523> it's most important for jumps because you can use it to avoid speculative execution (potentially entirely, if the delayed-goto happens early enough)
11:23:17 <b_jonas> perhaps you can even have small register arrays that are larger than 64 bytes but you can only use piecewise or rotate, so that you don't have to access memory that often
11:23:48 <ais523> I think something like that is valuable for spills
11:24:03 <ais523> especially in recursive code
11:24:25 <ais523> (non-recursive code can spill into statics, and IIRC was even commonly compiled that way a long time ago)
11:28:04 <b_jonas> I mean it could be useful even in cases where they don't spill, just have a fixed size. Today on x86 you just rely on the well working L1 cache for that.
11:28:41 <b_jonas> with indexed memory access which almost all instructions can do with one operand
11:29:38 <ais523> AMD Zen 2 and Zen 4 (but not Zen 3) are able to access spill slots as though they were registers (same performance characteristics), which is interesting
11:30:02 <ais523> and seems like the same sort of thing in reverse (possibly a means of repurposing syntax that compilers already generate)
11:35:00 <b_jonas> I'm also thinking of 6502 which has zero page memory access as sort of a replacement for registers, even though memory accesses all take the same amount of time regardless of the address, but the zero page can still save a cycle or two of fetching the instruction.
11:35:44 <b_jonas> but I think in a modern architecture you don't want that zero page to be modifiable by normal memory access instructions
11:36:07 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
11:36:57 -!- Lord_of_Life has quit (Ping timeout: 252 seconds).
11:37:00 -!- Lord_of_Life_ has joined.
11:38:19 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
11:38:56 <ais523> well, the 6502 is often used with very constrained memory
11:39:20 <ais523> the zero page might be a significant proportion of the memory you have, so you might want to be able to put normal variables there in addition to registers
11:39:33 <ais523> (especially as 256 registers is more than most programs will need)
11:40:35 <ais523> it was very common to store data in static addresses in the range that hardware uses for the stack, and just try to keep the stack usage low enough that the data wouldn't be overwritten
11:47:16 <HackEso> 352) <olsner> as always in sweden everything goes to a fixed pattern: thursday is queueing at systembolaget to get beer and schnaps, friday is pickled herring, schnaps and dancing the frog dance around the phallos, saturday is dedicated to being hung over \ 821) <kmc> no christmas without christ, no thursday without thor
11:56:25 <esolangs> [[Special:Log/upload]] upload * Zapcircuit * uploaded "[[File:Subscratch handdrawn.png]]"
12:11:02 <int-e> so apparently a "tiny" model has several millions of parameters
12:12:44 <int-e> (cf. https://arxiv.org/abs/2510.04871 which is retro in another fun way: They found that if they go above 2 layers (so 1 hidden layer) they suffer from overfitting.)
12:23:49 <int-e> But really "tiny" should be reserved for models that are way closer to https://en.wikipedia.org/wiki/Caenorhabditis_elegans in size (it features a "brain" made of 302 neurons)
12:29:54 <fizzie> The model I'm using for esolangs is gemma-2.0-2b-it-sfp, which has 2 billion parameters, and I thought that too is considered "relatively small". It was the smallest Gemma 2 variant they had.
12:29:57 <fizzie> Though looks like since then they've released Gemma 3, which comes in 270M/1B/4B/12B/27B size variants.
12:30:52 -!- ais523 has quit (Quit: quit).
12:32:05 <fizzie> It's also got a longer context window (32k for 270M/1B sizes, 128k for 4B/12B/27B sizes, compared to 8k for Gemma 2), so I could fit more wiki text in (and make it even slower).
12:32:27 <fizzie> (Really, though, if I wanted it to produce actually useful wiki-derived responses, it's the retrieval part that needs more work.)
12:36:26 <esolangs> [[Subscratch]] N https://esolangs.org/w/index.php?oldid=166120 * Zapcircuit * (+12709) Created page with "'''subscratch''' is an [[OISC]] language invented by User:Zapcircuit. its main purpose is for codegolfing games in [[scratch]]. its most interesting feature is its scratch implementation, which uses very few scratch blocks. ==implementation== to the right is an
12:37:36 <esolangs> [[Subscratch]] https://esolangs.org/w/index.php?diff=166121&oldid=166120 * Zapcircuit * (+4)
12:41:37 <esolangs> [[Subscratch]] https://esolangs.org/w/index.php?diff=166122&oldid=166121 * Zapcircuit * (+8) /* execution */
12:42:41 <esolangs> [[Subscratch]] https://esolangs.org/w/index.php?diff=166123&oldid=166122 * Zapcircuit * (+1) /* execution */
12:53:07 <esolangs> [[Subscratch]] https://esolangs.org/w/index.php?diff=166124&oldid=166123 * Zapcircuit * (+20) /* i/o */
12:56:52 <esolangs> [[Special:Log/newusers]] create * Sadran * New user account
12:56:56 <esolangs> [[Language list]] https://esolangs.org/w/index.php?diff=166125&oldid=165996 * Zapcircuit * (+17) /* S */
13:01:48 -!- tromp has joined.
13:05:45 <esolangs> [[Subscratch]] https://esolangs.org/w/index.php?diff=166126&oldid=166124 * Zapcircuit * (+228)
13:07:31 <esolangs> [[Subscratch]] https://esolangs.org/w/index.php?diff=166127&oldid=166126 * Zapcircuit * (+9)
13:14:31 <esolangs> [[Subscratch]] https://esolangs.org/w/index.php?diff=166128&oldid=166127 * Zapcircuit * (+64) /* i/o */
13:17:24 <esolangs> [[Subscratch]] M https://esolangs.org/w/index.php?diff=166129&oldid=166128 * Zapcircuit * (-1) /* i/o */
13:29:30 -!- Everything has joined.
13:41:45 <korvo> strerror: Right. On a GPU (at least the 2000s-era ones I know well) instructions can't really be reordered because they're being executed in parallel on multiple data. Instead the GPU has a bitmask which indicates the result of the most recent comparison, and that mask is used to disable execution for some of the parallel lanes whenever a comparison fails.
13:43:37 <korvo> The "speed" is wholly from parallelism; in my mind a GPU only goes at maybe 300-350 MHz of clock, maybe 1/10 of the main CPU's clock, and also there's a 30% or so slowdown just from the overhead of transferring data over PCI/AGP/etc. This means you'd better have a batch of at least 10 items *and* a non-trivial workload before the GPU is worth it.
13:44:13 <korvo> (Highly likely that you know all this. But maybe some lurker does not.)
14:12:26 -!- Sgeo has joined.
15:05:20 -!- Lord_of_Life has quit (Excess Flood).
15:08:06 -!- Lord_of_Life has joined.
15:12:19 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
15:19:28 <strerror> Perhaps more relevantly to text, a “tiny stories” model has ~30M parameters: https://arxiv.org/abs/2305.07759v2
15:21:27 <strerror> (Though a tiny model for esolangs wouldn't have a vocabulary considered suitable for bedtime stories.)
15:23:32 <esolangs> [[Syzygy]] https://esolangs.org/w/index.php?diff=166130&oldid=166113 * Aadenboy * (+2) ordering image under infobox and moving table of contents to after the overview
15:24:42 <esolangs> [[Syzygy]] https://esolangs.org/w/index.php?diff=166131&oldid=166130 * Aadenboy * (-84) nvm doesn't work like that. also fixing header levels
15:25:12 -!- amby has joined.
15:27:39 <strerror> korvo: I prefer to say that GPUs aren't fast, it's the von Neumann chips that are plodding along
15:28:23 <korvo> strerror: Yeah! Some days I think that the computer is actually the memory controller, and the CPU is just a peripheral ALU.
15:28:26 <strerror> Though even GPUs are bottlenecked by memory these days. Still hoping for CIM to become usable. They're pretty esoteric too, since everything has to be done using bitslicing.
15:29:02 <korvo> A GPU is just another peripheral on a bus. Like the CPU, it's slower than memory, and like the CPU, it will ask for lots of DMA. That's what the computer does, really: DMA all day.
15:29:28 <esolangs> [[Talk:1 Bit, a quarter byte]] M https://esolangs.org/w/index.php?diff=166132&oldid=165413 * TheBigH * (+250)
15:33:19 -!- tromp has joined.
15:33:21 <strerror> (CIM = Compute-in-memory, which adds a few extra word lines to a DRAM circuit to do elementary logic operations across a row, which typically has 64K bits or more.)
15:46:45 <korvo> CIM sounds nice, but I'm not sure how it would get rolled out to consumers. I suppose that first the memory controller would support it, then the CPUs in the next generation would use it?
15:53:38 <strerror> They're still working on throughput, AFAIK; DRAM is made in the DRAM factory, not the logic factory, and they're not used to making chips with fast clock rates.
15:54:17 <strerror> If it gets fast enough, presumably OpenAI could be counted on to buy out the first year of production.
15:59:42 <esolangs> [[User:Aadenboy/Sandbox]] https://esolangs.org/w/index.php?diff=166133&oldid=161119 * Aadenboy * (-62)
16:00:07 <esolangs> [[User:Aadenboy/Sandbox]] https://esolangs.org/w/index.php?diff=166134&oldid=166133 * Aadenboy * (+8)
16:00:20 <esolangs> [[User:Aadenboy/Sandbox]] https://esolangs.org/w/index.php?diff=166135&oldid=166134 * Aadenboy * (-8)
16:00:48 <esolangs> [[User:Aadenboy/Sandbox]] https://esolangs.org/w/index.php?diff=166136&oldid=166135 * Aadenboy * (+27)
16:01:34 <esolangs> [[User:Aadenboy/Sandbox]] https://esolangs.org/w/index.php?diff=166137&oldid=166136 * Aadenboy * (+35) revert
16:25:17 -!- JGardner has changed nick to jgardner.
17:20:15 -!- joast has quit (Quit: Leaving.).
17:43:58 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
17:57:35 -!- Everything has quit (Quit: leaving).
18:22:05 -!- amby has quit (Read error: Connection reset by peer).
18:22:23 -!- amby has joined.
19:13:10 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166138&oldid=146912 * Ractangle * (-62) /* Commands */
19:13:45 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166139&oldid=166138 * Ractangle * (+0) /* Syntax */
19:20:55 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166140&oldid=166139 * Ractangle * (-41) /* Syntax */
19:32:46 -!- sorear_ has joined.
19:36:58 -!- sorear has quit (Ping timeout: 248 seconds).
19:37:00 -!- sorear_ has changed nick to sorear.
19:57:23 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166141&oldid=166140 * Ractangle * (-56) /* Truth-machine */
20:05:06 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166142&oldid=166141 * Ractangle * (-47) /* Cat program */
20:10:25 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166143&oldid=166142 * Ractangle * (-30) /* Syntax */
20:11:54 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166144&oldid=166143 * Ractangle * (+157) /* Interpreter */
20:14:25 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166145&oldid=166144 * Ractangle * (+11) /* Interpreter */
20:16:23 <esolangs> [[Talk:8ial]] N https://esolangs.org/w/index.php?oldid=166146 * Ractangle * (+295) Created page with "ok this time Kaveh you don't need to apoligise because of the fact your interpriter as of 16th of October has outdated specifactaions~~~"
20:17:48 <esolangs> [[8ial]] M https://esolangs.org/w/index.php?diff=166147&oldid=166145 * Ractangle * (+73)
20:23:36 <esolangs> [[User:EZ132/std1ib.h]] N https://esolangs.org/w/index.php?oldid=166148 * EZ132 * (+3224) Created page with "'''<code>std1ib.h</code>''' is the header file that defines [[User:EZ132/Not C++|Not C++]]. <pre> #include <iostream> #include <string> #include <utility> #include <vector> #include <iterator> #include <stdlib.h> // delimiters & blocks #define def #define def
20:24:05 <esolangs> [[User:EZ132/Not C++]] N https://esolangs.org/w/index.php?oldid=166149 * EZ132 * (+3241) Created page with "'''Not C++''' (name provisional) is a programming language that is not [[C++]]. It can be compiled trivially into C++. ==Design & History== Not C++ is essentially C++ modified with a header file currently referred to as <code>std1ib.h</code>. This header consis
20:37:02 <Sgeo> Many languages have ternary. Algol-68 has abbreviated if elif else chains:
20:37:03 <Sgeo> INT p = (c="a"|1|:c="h"|2|:c="q"|3|4)
20:37:30 <Sgeo> Hmm I guess ternary can be used similarly anyway depending on precedence
20:37:49 -!- joast has joined.
20:50:31 -!- ais523 has joined.
20:51:36 <ais523> sorear: hi! I haven't Internet-seen you in ages
21:04:03 <ais523> <strerror> (CIM = Compute-in-memory, which adds a few extra word lines to a DRAM circuit to do elementary logic operations across a row, which typically has 64K bits or more.) ← now I'm imagining a very big embarrassingly-parallel vector calculation running across DRAM refresh cycles
21:04:34 <ais523> hmm, the simplest version of this would be a mass zero in which you can tell the memory controller "please zero this block of memory for me" – that would probably be useful even on its own
21:06:04 <ais523> I still remember the discussions about background zeroing of non-allocated memory using, effectively, the kernel idle process (Linux doesn't do it because of cache pollution, although there have been discussions about doing it using nontemporal writes)
21:06:20 <ais523> but just having the memory do it effectively instantly would bypass all those issues
21:07:18 <ais523> <strerror> korvo: I prefer to say that GPUs aren't fast, it's the von Neumann chips that are plodding along ← it makes more sense to think of speed in terms of latency and throughput rather than as a single figure: GPUs have massive throughput but aren't very good at latency
21:14:11 <zzo38> I had thought of computer design in many ways, and I also thought that it should avoid out of order execution, in the ways that are mentioned (and also to possibly make it simpler by not implementing out of order execution; the compiler can (hopefully) set up the order properly). I did not consider CIM but it also has some uses
21:16:30 -!- salpynx has joined.
21:18:08 <salpynx> IMO the Basic Stack TC proof is basically correct. int-e already pointed out the problems with it: 1) misses the data string setup (trivial to do with `push 1`, and obviously required for the rest to work, use `goto 2` for the loop) 2) Technically is using CT not BCT. The table is simple-translation of CT into BCT into Basic Stack, 3) the `istop;stop` is redundant on the 1x commands, but doesn't break anything. Other than that, it seems a valid idea. I
21:18:09 <salpynx> had a play with the interpreter, and with an initial data string, it runs BCT examples with deletion replaced with a moving pointer, so functionally equivalent. It feels like it was designed for this.
21:18:45 <salpynx> int-e: The BCT subtlety is a good observation. I worry I may have made this mistake in the past. At first I couldn't see why it might be useful, but it looks like the effect is running one set of productions once, then looping on the offset productions, which could be useful for some clever run-once setup code.
21:18:46 <b_jonas> ais523: could you just ask the GPU to do zeroing? or maybe CPUs could add background zeroing logic at the L3 cache?
21:19:47 <ais523> salpynx: I don't think it correctly implements a queue, the "push reg" command is intended to dequeue a queue but it pushes the address of the element it's dequeuing (with no way to dereference it), not the element itself
21:20:29 <int-e> ais523: it increments `reg`
21:20:39 <ais523> b_jonas: I'm not sure what the situation with GPUs accessing CPU memory is like at the moment – it may vary a lot based on the motherboard (I know that some computers make it efficient but most don't)
21:20:40 <int-e> which points to the start of the queue on the stack
21:20:51 <ais523> int-e: yes, reg is a pointer to the start of the queue on the stack, but the language has no way to read through the pointer
21:21:30 <int-e> ais523: condr does that
21:21:31 <ais523> knowing where the front of a queue is is not enough to be able to dequeue and branch on the dequeued element, you need to be able to actually read the element in question
21:21:59 <salpynx> there is a `push 1` `push 0` which works for the CT emulation
21:22:01 <ais523> int-e: ah, you're right – that was the bit I was missing
21:22:16 <ais523> it looks like that instruction was added specifically to make it non-bignum TC?
21:23:55 <int-e> ais523: yeah, it makes the stack "transparent" as the top of the page puts it
21:24:37 <salpynx> ais523: I wasn't sure what bit you were missing, but sounds like int-e revealed it :)
21:25:41 <salpynx> int-e: your comments made me think that "Binary Encoded Cyclic Tag" _is_ a useful thing if 2 symbol encoding is the constraint. That makes something like 101001 valid BCT but a syntax error in "Binary Encoded Cyclic Tag".
21:26:02 <salpynx> Failing on e.g. 101001 might be a common gotcha for BCT interpreters (if anything about BCT interpreters can ever be called 'common'). Something to test, like Deadfish 256 handling.
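[A minimal BCT interpreter, sketched here in Python (not from the log), shows why a plain interpreter should *not* fail on inputs like 101001: the program bits are just consumed cyclically, whether or not they decode as an encoded CT program. Names and step limit are my own.]

```python
def bct(program, data, steps=32):
    """Run Bitwise Cyclic Tag: command '0' deletes the leftmost data bit;
    command '1x' appends x to the data iff the leftmost data bit is 1.
    The program string is read cyclically; halts when data is empty."""
    prog = list(program)
    data = list(data)
    pc = 0
    trace = []
    for _ in range(steps):
        if not data:                      # empty data string halts the system
            break
        cmd = prog[pc]
        pc = (pc + 1) % len(prog)
        if cmd == '0':
            data.pop(0)                   # delete leftmost data bit
        else:
            x = prog[pc]                  # the bit following a '1' is the operand
            pc = (pc + 1) % len(prog)
            if data[0] == '1':
                data.append(x)
        trace.append(''.join(data))
    return trace
```

[Note the operand read wraps around the end of the program, which is exactly the 101001-style case a naive decode-then-run interpreter might reject.]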
21:26:45 <int-e> salpynx: The thing is that the first 0 in a program synchronizes everything so the feature is of very limited use.
21:27:10 <int-e> salpynx: it's more of a wart ;)
21:27:20 <salpynx> The setup / init code possibility is interesting
21:27:38 <ais523> I dislike the way that bitwise cyclic tag became the default, a much better option is "cyclic tag and invent your own syntax for it"
21:27:47 <salpynx> A simple example shows this kind of behaviour: BCT: 101001 = 10 10 0 (11 0 10 0)*, in CT: 0 0; (1; 0;)* (apologies for ad-hoc mixed notation, hopefully it's esoterically clear enough)
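[The standard CT-to-BCT encoding behind salpynx's notation: each production symbol x becomes the pair 1x, and each production is terminated by a 0. A one-liner sketch (my own helper name):]

```python
def ct_to_bct(productions):
    """Encode a cyclic-tag program as BCT: each symbol x in a production
    becomes '1'+x, and each production ends with a terminating '0'."""
    return ''.join(''.join('1' + s for s in prod) + '0'
                   for prod in productions)
```

[Encoding the two-production loop (1; 0;) gives "110100", matching the "(11 0 10 0)*" tail of the example above; the leading "10 10 0" decodes as the single production 00.]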
21:27:50 <ais523> (for TCness proofs, at least)
21:29:23 <salpynx> I think the wiki page contributes to that problem, BCT is explained in more detail, and has clearer examples. I've used that, and am probably guilty of defaulting to BCT numerous times in the past
21:29:33 <ais523> cyclic tag effectively having three symbols is awkward sometimes, but BCT doesn't really fix that problem
21:30:02 <ais523> (this was the major motivation behind inventing https://esolangs.org/wiki/Echo_Tag https://esolangs.org/wiki/Grill_Tag, which each genuinely can be expressed using two symbols)
21:30:09 <ais523> *Echo Tag and Grill Tag
21:30:42 -!- vista_user has joined.
21:32:29 <salpynx> The ; in CT is like a newline, if you think of the code as a finite list of 2 symbol productions, and deletion occurs by default as part of the process
21:36:20 <salpynx> Hm, Echo Tag is categorized as 'unimplemented'. That might be a fun one to do.
21:36:33 <ais523> when talking to people who don't know how cyclic tag works already, I usually explain it as a program formed of "pop the top element, then push this string if the popped element wasn't 0"
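[ais523's one-sentence description translates almost directly into code. A sketch in Python (function and parameter names are my own) of a cyclic tag stepper:]

```python
from collections import deque

def cyclic_tag(productions, data, steps=16):
    """Cyclic tag as described above: pop the front element, then append
    the current production iff the popped element wasn't 0; the
    productions are used in round-robin order."""
    queue = deque(data)
    trace = []
    i = 0
    for _ in range(steps):
        if not queue:                     # empty queue halts the system
            break
        popped = queue.popleft()
        if popped == '1':
            queue.extend(productions[i])  # push this production's string
        i = (i + 1) % len(productions)
        trace.append(''.join(queue))
    return trace
```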
21:37:04 <ais523> Echo Tag's a bit weird because it's been manually compiled into a lot but I'm not sure that there's an automated compiler yet
21:37:10 <ais523> err, manually compiled from a lot
21:37:38 <salpynx> Has it been used in a TC proof for something else?
21:38:58 <ais523> base 10 Addition Automaton, at least
21:39:04 <ais523> https://esolangs.org/wiki/Addition_Automaton
21:41:01 <salpynx> that's a totally new one to me, I'd at least recognised the names of the other * Tags
21:42:52 <salpynx> the numeric output is visually interesting, you can see the structure in the digits. nice.
21:43:17 <ais523> the way I think about it is that almost all TC languages can trivially emulate either a counter machine or a tag system, and so making TC proofs easier is mostly accomplished by making easier-to-implement counter machines and easier-to-implement tag systems
21:47:14 <salpynx> I've always felt that there is a lack of confirmation example programs in tag systems or counter machines to concretely verify a conversion.
21:47:35 <ais523> part of the issue is that natively written tag is incredibly slow
21:47:37 <salpynx> The BCT wiki page example gets used a lot; I've used it, and someone else did recently
21:47:47 <ais523> so you need an optimising interpreter to be able to run it
21:49:21 -!- tromp has joined.
21:49:25 <salpynx> I'm pretty sure I've written a 'hello world' in 2 reg Minsky machine and was going to figure out how to make an optimising interpreter to let it complete
21:50:20 <salpynx> I got distracted by the various MM notations, and how they weren't quite set up for 2-reg
21:51:52 <salpynx> That's right, I convinced myself PMMN was not TC for 2 registers, then decided it was, but not in the obvious way
21:52:13 <esolangs> [[Bitwise Cyclic Tag]] https://esolangs.org/w/index.php?diff=166150&oldid=101531 * Ais523 * (+146) /* Example (Collatz sequence) */ credit where this example comes from
21:52:32 <ais523> that example is used so much we should properly credit it to the original author
22:00:07 <salpynx> For cyclic tag examples I created this BASIC-inspired fantasy console idea with a data-string output encoding: https://esolangs.org/wiki/CTBASIC and Tektronix 4010 graphical output for a retro vibe
22:00:24 <salpynx> Not sure I've written it up well enough to do it justice
22:02:25 <salpynx> There's a pre-calculated rotating cube example that runs using cyclic tag .... it's just output but it cycles over distinct animation frames
22:03:27 <ais523> (very) recently I've been interested in the question of compilations that run quickly in naive tag interpreters
22:03:29 <salpynx> I never quite figured out how to do more complex arbitrary conditional branching in CT
22:04:12 <ais523> I think running a program at a speed that's n log(n) slower than the original is possible (I have a sketch proof at https://esolangs.org/wiki/Globe but the details of both halves are missing)
22:04:41 <salpynx> do you mean finding useful algorithms that run well in tag systems, or something else?
22:04:52 <ais523> a compilation scheme, e.g. Turing machine to tag system
22:05:00 <ais523> which doesn't lose any more performance than necessary
22:05:20 <ais523> almost all tag system TCness proofs go via counter machines and store the counters exponentially, so you get a double-exponential slowdown
22:05:55 <ais523> (although one of the exponentials is fairly easy to remove with an optimising interpreter)
22:06:09 <vista_user> nice to see another user in the wiki who likes basic tho
22:06:52 <salpynx> I guess that's what I was trying to figure out with CTBASIC, how to implement higher level programming concepts (mostly) directly.
22:07:01 <ais523> BASIC was my first programming language
22:10:32 <vista_user> ais523: same... well technically it was batch but only dir and cd, as a language i actually coded in it was basic (and a bunch of hopping onto python for like 3 days, then leaving it for 3 months, then back, then out, ad nauseam)
22:11:59 <vista_user> blame me being too busy doing weird shit in a c64 emulator i got just for the games and ended up using for peek and poke shitfsckery to even bother with python for a while
22:12:25 <salpynx> Getting more direct high level effects in tag systems tends to blow up the number of productions required, that seems to be the trade off. They can be easily generated following simple rules, but they take up space.
22:22:31 -!- ais523 has quit (Ping timeout: 256 seconds).
22:22:49 -!- ais523 has joined.
22:35:11 -!- vista_user has quit (Remote host closed the connection).
22:37:53 -!- ajal has joined.
22:38:27 -!- amby has quit (Remote host closed the connection).
22:38:27 -!- salpynx has quit (Remote host closed the connection).
22:57:03 <esolangs> [[User:Quito0567]] https://esolangs.org/w/index.php?diff=166151&oldid=154435 * Quito0567 * (+18)
22:57:32 <esolangs> [[User:Quito0567]] https://esolangs.org/w/index.php?diff=166152&oldid=166151 * Quito0567 * (+5)
22:57:49 <esolangs> [[User:Quito0567]] https://esolangs.org/w/index.php?diff=166153&oldid=166152 * Quito0567 * (+2)
22:59:30 <esolangs> [[Boomerlang]] https://esolangs.org/w/index.php?diff=166154&oldid=115671 * Quito0567 * (+14)
23:01:31 -!- jgardner has changed hostmask to sid553797@user/meow/jgardner.
23:03:45 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
23:18:40 -!- Lymia has quit (Quit: zzzz <3).
23:19:07 <esolangs> [[?brainfuck]] https://esolangs.org/w/index.php?diff=166155&oldid=166089 * HyperbolicireworksPen * (+120) changed 5/8,1;/14,2;1/8,3 added infinite series stuff as well
23:19:15 -!- Lymia has joined.
23:20:20 <esolangs> [[?brainfuck]] https://esolangs.org/w/index.php?diff=166156&oldid=166155 * HyperbolicireworksPen * (-1) counted stuff
23:20:58 <esolangs> [[?brainfuck]] https://esolangs.org/w/index.php?diff=166157&oldid=166156 * HyperbolicireworksPen * (-1)
23:21:12 <esolangs> [[?brainfuck]] https://esolangs.org/w/index.php?diff=166158&oldid=166157 * HyperbolicireworksPen * (-1)
23:30:15 <esolangs> [[?brainfuck]] https://esolangs.org/w/index.php?diff=166159&oldid=166158 * HyperbolicireworksPen * (+153)
23:42:50 -!- avih has left.
23:51:52 <korvo> ais523: That's another solid way to look at GPUs, yeah.
23:52:18 <korvo> sorear: Oh hi! Sorry I haven't been on top of that Busy Beaver stuff. Feel free to ping me if I'm blocking progress.