00:00:29 <b_jonas> Mathematica is somewhat like scheme where you can define ordinary functions that take their arguments by value, or macro-like functions that take their arguments unevaluated, and there's a third kind that takes only the first argument unevaluated.
00:00:33 <ais523> the basic problem is that Prolog's closest equivalents to closures/lambdas are super-awkward
00:01:29 <ais523> that said, something like Brachylog could do a while loop pretty easily – I wonder if it has one?
00:04:09 <ais523> looks like it doesn't have one: I suppose it has enough other types of loops that they usually aren't necessary, so nobody really noticed
00:04:40 <b_jonas> you can write all sorts of looping library functions that don't require mutable variables, similar to Haskell
00:04:48 <b_jonas> I mean you can do that in prolog
00:04:54 <ais523> actually ⁱ can be viewed as a do-while loop, but a weirdly written one
00:05:15 <ais523> it repeatedly runs the predicate/block it applies to until the rest of the program succeeds
00:05:58 <ais523> (Brachylog doesn't have mutable state even though Prolog does)
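[The mutation-free looping library functions b_jonas describes can be sketched in Python with a hypothetical `while_` combinator; the names are illustrative, and the real thing in Prolog would be a recursive predicate rather than a function.]

```python
def while_(cond, step, state):
    """A while loop as a pure library function: no mutable variables.

    Repeatedly applies `step` to `state` while `cond(state)` holds,
    threading the state through recursion in the Haskell/Prolog style.
    """
    return while_(cond, step, step(state)) if cond(state) else state
```

[For example, `while_(lambda n: n < 100, lambda n: n * 2, 1)` doubles the state until it reaches at least 100, returning 128.]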
00:08:45 <ais523> oh, this discussion reminds me – I have decided that it is usually better for assignment operators to assign rightwards, i.e. 2 → x rather than x := 2
00:09:36 <b_jonas> ais523: is that in languages that are usually formatted right-aligned, so you can scan the more straight right edge of the code for where a variable is assigned to?
00:09:46 <b_jonas> or also in languages that are formatted left-aligned?
00:09:54 <ais523> b_jonas: both, I hadn't thought about alignment
00:09:57 <b_jonas> this matters if you type the code from left to right
00:10:08 <ais523> I can see an argument that you might want to make it clear when each variable is written to
00:10:32 <b_jonas> I want a language that's right-aligned and not just assignements are on the right but the function is usually on the right of its arguments
00:10:49 <b_jonas> or a backwards APL-like where a function is usually after its first argument
00:10:59 <ais523> function after first argument, I agree
00:11:28 <ais523> I think more and more languages are moving in that direction (but not enough of them and not quickly enough)
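[The rightward-assignment style ais523 prefers can be illustrated with a hypothetical toy interpreter in Python; the `→` syntax and the function name are made up for the sketch, and expressions are just Python arithmetic over earlier variables.]

```python
def run_rightward(src):
    """Evaluate a toy language where `expr → name` assigns rightwards,
    i.e. `2 → x` rather than `x := 2`."""
    env = {}
    for line in src.splitlines():
        if not line.strip():
            continue
        expr, name = line.rsplit("→", 1)
        # evaluate the left side in the current environment, then bind
        # the result to the variable named on the right
        env[name.strip()] = eval(expr.strip(), {"__builtins__": {}}, dict(env))
    return env
```

[Running `run_rightward("2 → x\nx + 3 → y")` yields `{"x": 2, "y": 5}`; reading order matches evaluation order, with the destination last.]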
00:11:28 <b_jonas> this is annoying because you need to modify your editor for it
00:12:51 <ais523> in a golfing language I've been working on, the top level of the program is written in such a way that the output of each statement is implicitly the first argument of the next statement, which isn't explicitly specified
00:13:24 <ais523> (it uses forward-Polish for the remaining arguments, although usually degenerate cases of it)
00:14:41 -!- pool has quit (Read error: Connection reset by peer).
00:15:14 <b_jonas> Enchain is defined to support both a left-aligned and a right-aligned mode, but probably only in the language definition -- since I don't have a right-aligned editor it's unlikely that I'll actually use or implement the right-aligned mode. In Enchain arguments can always be either before or after functions. But there's a scoping operator that I haven't talked about yet, which lets you create local
00:15:20 <b_jonas> variables sort of like C scope braces. In left-aligned mode (wimpmode) variables on the column of the opening brace or to the right are local; in right-aligned mode (turtle mode) variables on the column of the opening brace and to the left of it are local. This makes the two modes asymmetric, but I think it's the right way for braces to work in either case.
00:17:22 <ais523> how does indentation work in a right-aligned language?
00:17:26 <b_jonas> The local variables are unrelated to any variable in the same column mentioned outside the pair of braces; if the braces are in a function body then their lifetime is restricted to the function call, and if the function is called multiple times recursively then there's a separate copy for each call on the stack.
00:18:23 <ais523> plenty of editors can do right-alignment; basically all word processors can, and HTML textarea probably can too
00:18:39 <ais523> and almost certainly Emacs although I'm not sure how to configure it like that
00:20:26 <b_jonas> ais523: yes, but most editors aren't too helpful in editing something like Enchain where the specific columns matter because columns work like variable names, and I think the few editors that have a mode that helps there don't do right-alignment across variable line lengths, so the best you can do is edit fixed line lengths then remove the spaces from the beginning of all lines.
00:20:59 <ais523> don't you just add trailing spaces to push a line further to the left?
00:21:20 <b_jonas> also I believe that no word processor supports tabs in right-aligned mode the way Enchain expects them, but of course tabs are optional and this is academic because nobody will write code with tabs because we don't have the tools to edit them.
00:21:35 <b_jonas> ais523: yes, you add trailing spaces to push the line further to the left
00:21:50 <ais523> this seems to work: data:text/html,<html><body><textarea%20style="text-align:right"></textarea>
00:22:20 <ais523> complete with trailing spaces to push lines leftwards
00:22:53 <ais523> it doesn't accept tabs (although shouldn't such a system actually be using backtabs?)
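[The trailing-spaces trick from the discussion above can be sketched in Python with a hypothetical `right_align` helper: padding each line on the left to a common width, so that a trailing space on a line pushes its content one column leftwards of the right edge.]

```python
def right_align(lines, width):
    # pad each line on the left so its last character sits at column
    # `width`; a trailing space in the source line therefore pushes the
    # visible content one column further to the left
    return [line.rjust(width) for line in lines]
```

[For example, `right_align(["x = 1", "y = 2 "], 8)` gives `["   x = 1", "  y = 2 "]`: the second line's content ends one column earlier because of its trailing space.]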
00:23:31 <b_jonas> ais523: even without tabs it's hard: you need overwrite mode so you can change a character without changing other characters in the same line,
00:23:57 <ais523> well, you don't need it, I've done plenty of vertically aligned editing without (but it does help)
00:24:05 <b_jonas> and ideally you also want to be able to move the cursor down to get to the same column of a later line and write something in that line in a way where if the line is too short the editor inserts enough spaces to write in that column
00:24:29 <b_jonas> some editors can do these in left-aligned mode at least, but I don't know if any can do it in right-aligned mode directly
00:26:45 <b_jonas> if you really want, you can in theory write programs in a language like Enchain on programming paper with a pre-printed grid and then transcribe them to punch cards (though Enchain uses the ASCII character set, so you need a punched card representation for backtick and tilde and caret etc)
00:27:05 <b_jonas> but you'll have fixed-width lines
00:27:46 <b_jonas> left-aligned is easier because a teletype can print variable-length left-aligned lines even if it doesn't have RAM, for printing right-aligned lines you need to buffer a line in RAM before actually printing it
00:28:27 <b_jonas> so there's a fundamental asymmetry
00:29:16 <b_jonas> it doesn't matter today because every device today has enough RAM to buffer a line
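[The teletype asymmetry b_jonas describes can be sketched in Python (hypothetical function names): a left-aligned printer can emit each character as it arrives, while a right-aligned one must buffer a whole line in RAM, because the padding depends on the final line length.]

```python
def teletype_left(stream, emit):
    # left-aligned: every character can be printed the moment it arrives
    for ch in stream:
        emit(ch)

def teletype_right(stream, width):
    # right-aligned: a whole line must be buffered before any of it can
    # be printed, since the left padding depends on the final line length
    out, buf = [], []
    for ch in stream:
        if ch == "\n":
            out.append("".join(buf).rjust(width))
            buf = []
        else:
            buf.append(ch)
    return out
```

[For example, `teletype_right("ab\nc\n", 4)` returns `["  ab", "   c"]`, while `teletype_left` needs no buffer at all.]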
00:30:02 <ais523> what about using RTL character order?
00:30:16 <ais523> I guess that'd be annoying to type, though, so it'd only be useful as a transmission format
00:31:02 <b_jonas> then you just have the equivalent of a left-aligned language; it's not fundamentally different from an ordinary left-aligned language, it's just mirrored
00:32:14 -!- lynndotpy6093627 has joined.
00:33:01 <b_jonas> it's useful if most of your identifiers are in a natural language that's written right to left
00:33:08 -!- perlbot has quit (Ping timeout: 244 seconds).
00:33:12 -!- perlbot_ has joined.
00:33:59 -!- lynndotpy609362 has quit (Ping timeout: 244 seconds).
00:34:00 -!- lynndotpy6093627 has changed nick to lynndotpy609362.
00:34:38 -!- perlbot_ has changed nick to perlbot.
00:40:12 <b_jonas> there's a third direction: you can make an editor mode to write a language backwards compared to how its characters are represented in a file, like an editor that lets you write C code backwards so you type the arguments before the function, or the rvalue before the lvalue in an assignment. it's a bit tricky because you want to type each token forwards and the lexical syntax might not work backwards so
00:40:18 <b_jonas> you may have to type extra token separators sometimes, but it can mostly work. then other programmers and compilers can still read your program the way they're used to but you're typing it in a way that may make more sense.
00:41:23 <b_jonas> of course then comma/semicolon sequencing will be backwards for you, in an imperative program you type first the statement that's run later
00:42:09 <b_jonas> and in languages like C where the order of declarations matter, you'll have to type those backwards too, using a name before you declare it
00:43:02 <b_jonas> of course that's only the default, you can probably still type code in whatever order you want and jump around while editing
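[The backwards editor mode b_jonas sketches boils down to this: the saved file is the reverse of the typing order, so compilers and other programmers read it forwards. A minimal Python sketch, with a hypothetical function name:]

```python
def file_order(typing_order):
    # an editor mode where each newly typed statement is prepended: you
    # type the statement that runs last first, and the saved file still
    # reads forwards as usual
    return list(reversed(typing_order))
```

[So typing `print(y)`, then `y = x + 1`, then `x = 2` saves a file whose statements run in the usual declaration-before-use order.]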
00:44:16 <b_jonas> Enchain is imperative where execution normally goes forwards in the same direction as the characters in code,
00:44:41 <b_jonas> modified of course by function calls and gotos.
00:58:30 <esolangs> [[Ima gte. Ima dana/Operation table]] https://esolangs.org/w/index.php?diff=177582&oldid=177550 * BODOKE2801e * (+337)
01:02:29 <esolangs> [[Ima gte. Ima dana/Operation table]] https://esolangs.org/w/index.php?diff=177583&oldid=177582 * BODOKE2801e * (+51)
01:03:32 -!- amby has quit (Quit: so long suckers! i rev up my motorcylce and create a huge cloud of smoke. when the cloud dissipates im lying completely dead on the pavement).
02:08:44 -!- tromp has quit (Ping timeout: 252 seconds).
02:45:08 -!- joast has quit (Quit: Leaving.).
04:29:57 <esolangs> [[Bit-ter lang]] N https://esolangs.org/w/index.php?oldid=177584 * BODOKE2801e * (+883) Created page with "'''Bit-ter lang''' is a [[Esolang]] made by [[User:BODOKE2801e]], it is memoryless and works on binary, and here's a joke about the lang: What is the most bitter lang? Bit-ter lang ==Commands=== ! is input 0 is false 1 is true | outputs the next thing ne
05:35:27 <zzo38> Do you think it could work for a video card: Each window has a "frame program" and a "pixel program", which use different instruction sets; only the frame program has flow controls and the ability to write memory, but the pixel program has no flow controls, and there is a limit on how many instructions can be read. (A window has several other properties as well, such as the colour index mask bits)
05:40:04 <ais523> current video cards are a bit like that but more advanced – I think it would work for a retrocomputing video card but wouldn't be able to compete with the current generation of video cards
05:42:24 <ais523> I'm actually not sure what machine code video cards use internally because none of them let you write it directly, instead you give source code and there's a compiler in the driver
05:43:23 <korvo> zzo38: You want to run the frame programs on the GPU? The pixel programs sound a lot like fragment shaders, where "fragment" is just the GL term for pixel.
05:43:38 <zzo38> (I do not have any desire to do such things as 3D graphics with lighting and that stuff in real time, although I do consider such things as security to be necessary (so that one window's programs cannot read or write the memory or parameters of other windows, unless some memory is assigned to multiple windows).)
05:44:02 <zzo38> korvo: Yes, I did think the pixel program is similar to a fragment shader.
05:44:18 <korvo> ais523: They're just register machines with a basic return stack that allows for some loops and subroutines. There's not much magic. I can point you at some AMD/ATI datasheets if you want to look at ISAs.
05:45:30 <zzo38> (The frame program might be used for such a thing as cursor blinking, although there might be other uses as well; many windows might not need a frame program.)
05:46:11 <ais523> zzo38's design doesn't have a vertex shader, but I think those are primarily useful for 3D graphics, so it makes sense to leave it out if that isn't a goal
05:46:31 <korvo> The only interesting instructions to me compared to other DSPs are DDX and DDY, taking partial derivatives in screen space. These are done by running pixels in a 2x2 grid and taking finite differences. You can take the derivative of any local variable this way, which is kind of cool. This sort of thing is why GPUs can only render in 8x8 or 16x16 tiles. (Also tiled rendering's popular on embedded GPUs.)
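[korvo's finite-difference description can be sketched in Python; the function names are hypothetical, and real DDX/DDY instructions operate on values already computed for a hardware 2x2 pixel quad rather than re-invoking the shader.]

```python
def quad_derivatives(f, x, y):
    # evaluate f on neighbouring pixels of a 2x2 quad and take finite
    # differences, the way GPU DDX/DDY instructions approximate
    # screen-space partial derivatives of any shader-local value
    v00 = f(x, y)
    ddx = f(x + 1, y) - v00   # difference with the horizontal neighbour
    ddy = f(x, y + 1) - v00   # difference with the vertical neighbour
    return ddx, ddy
```

[For a linear function like `lambda x, y: 3 * x + 5 * y` the finite differences recover the exact partial derivatives, `(3, 5)`.]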
05:47:55 <korvo> zzo38: There are some parts of the modern GPU that still work like that. At the extreme end, the hardware cursor's position is usually a pair of MMIO'd registers. I'm on Xorg, so every time I touch the mouse, Xorg gets a SIGIO, handles a USB event, and writes to MMIO.
05:51:03 <zzo38> I might also have sprites (which only display a picture at a position and have no programs associated with it); the mouse cursor is a sprite bound to the root window. And then, also window sets (one of which is displayed at once); my idea for a computer and operating system design will probably need three (one for normal use, one for full screen applications, and one for the system special screen).
05:51:52 <korvo> Yeah, I didn't want to use the word "sprite" in case you weren't in that mindset, but it's the exact same concept.
05:53:04 <korvo> It's worth knowing that the GPU used to have VGA. Like, literally there was a VGA BIOS and special VGA chips; when the card powered on, it would eventually get into a VGA mode. That stopped being the case in the 2010s.
05:54:21 <korvo> Instead today the GPU starts in a kind of emergency mode that *maybe* emulates VGA a bit. The operating system is expected to boot the card. At the best end, Radeons and Intel chipsets require setting power policy and booting the 3D engine; at worst, nVidia famously requires a big opaque licensed blob which includes onboard memory management and a scheduler.
05:55:17 <korvo> ...Sorry, that's an ambiguous phrase. The nVidia drivers have to compile and deliver a blob onto the GPU, a mix of microcode and GPU bytecode, before the GPU can pretend to be VGA or whatever.
05:56:38 <korvo> But this is how zzo38's frame programs might run. The frame program doesn't do any fragment handling directly, but it could still instruct the GPU's memory controller.
05:57:08 <ais523> korvo: I think even nVidia GPUs are able to show bootloader comments during early boot, before the OS (which would contain the GPU driver) has loaded
05:57:39 <ais523> although IIRC nowadays nVidia ships the blob in question on the graphics card itself rather than having the OS do it
05:58:36 <b_jonas> ais523: so it turns out that the rust devs are ahead of you and prepared for identifiers being insensitive to consonant voicing differences, and that's why they named the trait for the modulo operator std::ops::Rem instead of std::ops::Mod, because the latter would collide with std::ops::Not
05:58:47 <korvo> The legendary cancelled Intel GPU board, Larrabee, would have been so cool here. Larrabee was literally about 120 Pentium 3 cores on a PCIe board. x86 is pretty good at describing memory hierarchies at a distance, so we could imagine that a frame program is just one dedicated GPU core which instructs shared memory controllers. This sort of dedicated scheduling would have to happen anyway for hundreds of cores.
05:59:54 <korvo> ais523: Yeah. To facilitate that, nVidia's policy is to open-source a basic boot driver for any operating system that pays them, and also for Linux and BSD as a show of goodwill. The driver, "nv", is full of magic numbers and is basically obfuscated. It *is* legal portable C, I guess.
05:59:58 <ais523> korvo: hmm, don't architectures like Knight's Landing have a lot in common with GPUs?
06:00:36 <korvo> But in the 2010s the GPUs started to change so that they no longer have 2D engines. They also are starting to drop video engines; it's all GPGPU again in the 2030s, I imagine.
06:00:48 <ais523> although GPUs are more pervasively SIMD
06:01:17 <ais523> for really early GPUs I think of things like the NES PPU
06:01:25 <ais523> which was extremely fixed-pipeline
06:01:47 <korvo> ais523: I guess? You hit it precisely; it's SIMD. It's also MIMD in some cases, like programming for the Cell on the Playstation 3.
06:02:16 <korvo> I mean that I don't know much about Knight's Landing. I know a bit about AMD APUs, which are definitely more GPU-like.
06:03:14 <ais523> Wikipedia says 72 cores, 4 threads per core, and it does AVX-512
06:03:24 <ais523> so not really a GPGPU but moving in that sort of direction
06:03:30 <korvo> ais523: Oh, have you heard of "supershaders"? There's this interesting pattern in GL 2 where emulation of GL 1.4 is best done by writing more-or-less the entire GL 1.4 rendering pipeline as a per-fragment process. One shader to rule them all.
06:03:33 -!- chloetax has quit (Ping timeout: 246 seconds).
06:03:35 <ais523> (the cores themselves are just a fairly normal x86 but with weird performance properties)
06:03:58 <ais523> korvo: I've heard of them but have trouble remembering the details
06:03:59 <zzo38> Some of the window parameters might be a tile counter and the horizontal and vertical tile counter divider, to avoid needing multiplication and division for the common case of implementing a tiled screen (a PC text mode emulation would be one example of this). Some parameters (such as these) might be readable and writable by frame programs, while the window position would be an inaccessible parameter.
06:04:06 <ais523> a sort of "compile once run anything" I think?
06:05:02 <korvo> Yeah, basically. The supershader is given a bunch of "uniforms" and "varyings", which are different ways of binding global variables. Also textures are bound in the normal way, TCL (transform, clipping, lighting) is done with standard per-vertex processing, etc.
06:06:38 <korvo> Surprisingly, this is a correct way to do GL 1.4 emulation! It's fast enough. One might think that it's very expensive to send a literal packet of uniforms in the GPU's command buffer, but it's not. The expense is always in binding textures.
06:06:59 <korvo> The Dolphin emulation suite uses supershaders, for a real-world example.
06:07:10 <b_jonas> is this the sort of thing where the architectures evolve for ten more years and suddenly you look at them and you can no longer tell which chip is supposed to be the CPU and which one the GPU because they've become so similar?
06:07:30 <ais523> it's basically the GPU version of an interpreter, by the sound of it
06:08:02 <ais523> there are always going to be programs that parallelize poorly
06:08:14 <korvo> Oh, they didn't call them that. https://dolphin-emu.org/blog/2017/07/30/ubershaders/
06:08:37 <ais523> one program I'm working on is CPU-bound and embarrassingly parallel but it can't reasonably make use of SIMD because it involves a lot of 64×64→128 multiplications
06:08:46 <ais523> x86-64 has a builtin for that but it only works on scalars
06:09:22 <ais523> and I think vectorising the non-multiplication bits would cost more in moving bytes around than it would gain in parallelised arithmetic
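[The widening multiply ais523 mentions can be sketched in Python, building one 64×64→128 product from four 32×32→64 partial products; this is the schoolbook decomposition a machine without a native widening instruction must use, not any specific ISA's builtin, which is part of why the operation vectorises poorly.]

```python
def mul64_wide(a, b):
    # split both 64-bit operands into 32-bit halves and combine four
    # partial products, propagating the carry out of the middle sum
    M32 = (1 << 32) - 1
    a_lo, a_hi = a & M32, a >> 32
    b_lo, b_hi = b & M32, b >> 32
    ll, lh = a_lo * b_lo, a_lo * b_hi
    hl, hh = a_hi * b_lo, a_hi * b_hi
    mid = (ll >> 32) + (lh & M32) + (hl & M32)
    lo = (ll & M32) | ((mid & M32) << 32)
    hi = hh + (lh >> 32) + (hl >> 32) + (mid >> 32)
    return hi, lo   # the 128-bit product as (high 64 bits, low 64 bits)
```

[Python's arbitrary-precision integers make it easy to check the decomposition: `(hi << 64) | lo` always equals `a * b` for 64-bit inputs.]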
06:09:27 <korvo> GPU drivers do not have good shader compilers. They have, at best, a slightly outdated copy of LLVM. I think that if somebody wants to wield libdrm directly, they could probably just emit their own bytecode. This is what makes shaders expensive to load.
06:10:04 <korvo> The GPU doesn't really need to context switch. The GPU's expenses are all about updating memory: caches, mappings, DMA'd buffers.
06:11:19 <b_jonas> korvo: is it actually possible to emit your own bytecode and send it to the GPU in the sense that an operating system can let a user program do that and you can't use that to elevate permissions?
06:11:20 <korvo> b_jonas: Maybe one cursed part of this is subinterpreters. Like ais523 says, at some level this is about embedding an interpreter into the chip. x86 has an interpreter for x87, for example. amd64 has an interpreter for SSE.
06:11:30 <ais523> well, GPUs do need to context switch precisely because updating memory is slow, so they swap threads out while they're in the middle of a memory load
06:11:48 <korvo> b_jonas: Yes! Moreover, I can try to help you with that, and there's also an entire IRC channel (#dri-devel, they used to be called?) that can help.
06:12:23 <b_jonas> not at the moment, but good to know, thank you
06:13:10 <korvo> https://bpa.st/GWSRC here's ls on my /dev/dri. libdrm boils down to open() and ioctl(). Here you can see that if you have `video` or `render` group then you can do as you like.
06:14:06 <korvo> There is also a concept of DRM master and VGA arbiter. (Bless kernel folks for these names.) DRM master is a userspace process that gets early priority over the screen and preempts all other DRM/DRI clients; that's usually Xorg or Wayland, to give you an idea of what sorts of responsibilities you'd have.
06:14:47 <korvo> VGA arbiter is the idea that VGA BIOS comes with a lot of global state, so if you have two VGA adapters on your system then you need a global switch between them. This usually doesn't matter, right up until it does.
06:15:56 <korvo> Sorry if I'm overeager. I'm drugged and sleep-deprived, but more importantly the GPU community is always starved for developers. There's only like a few hundred of us and we're really just compiler engineers with the patience to hack the kernel and reboot the machine if we lock up the PCI bus by crashing the GPU.
06:19:32 <b_jonas> ais523: are they 64 bit × 64 bit multiplications where you care about most of the 128 bits of the result?
06:19:51 <ais523> when I taught GPU programming, the GPUs were somehow able to self-recover if they were crashed (but the sort of crashes we're talking about are null-dereferences and the like which are probably easy to recover from)
06:19:56 <korvo> b_jonas: Oh! That was the shot, here's the chaser: the kernel has to *parse* userspace's submitted command buffers. The kernel's got a memory manager for GPU objects (okay, technically, it has two and a half GPU memory managers) and it will change your buffers to point to the right objects on the GPU for you. Hope the kernel doesn't have any parser bugs!
06:20:36 <korvo> Oh, also, hope the kernel doesn't insert fences wrongly. Or hope you got your fences right. Or hope that the fences are just slow and not misrendering. etc.
06:21:55 <ais523> I think GPU programming is the only platform on which I've seen programmers encouraged to omit logically required fences on the basis that the hardware will automatically have enough fencing for the program to still work
06:22:21 <ais523> presumably the optimiser knew enough about the technique to not mess with the fencing invariants itself, before the hardware saw it
06:23:14 <korvo> Yeah. It's remarkable that, in GL, we need to use an extension just to get calloc() for GPU memory. This property is called "robustness" and in the 2000s it basically didn't exist; you could read Somebody Else's Framebuffer just by, like, mmap() and read().
06:24:23 <korvo> To be fair, glClearBuffer() is really expensive if you're robust by default. Up until then, clearing a buffer was done by enqueing a draw command for a big black rect; the threat model wasn't there yet.
06:24:33 <ais523> thinking about it, it shouldn't be too hard for a GPU to tell the OS kernel "I crashed running thread X, please recreate the graphics environment without the program that did that"
06:25:13 <ais523> non-crashing overwrites of other threads' data would be harder to deal with
06:25:32 <ais523> but you could use an MMU for that just like CPUs do (I suspect GPUs have MMUs nowadays even though they originally didn't)
06:25:47 <b_jonas> korvo: and then your browser grows an extension to expose all that low-level stuff to websites, not just the high-level GL interface
06:26:15 <ais523> that said, I would expect them not to have traditional TLBs and am not sure what they do instead (possibly some sort of manually loaded TLB?)
06:26:49 <zzo38> With my idea of how I would do it, effectively the GPU could not crash, and if one window has errors that prevent it from working, that does not affect any other windows (except possibly those that share memory with it, although I expect it would probably not affect those either)
06:27:15 <ais523> oh right, I don't think I've told anyone how much I hate the name TLB yet
06:27:41 <ais523> it's one of those names that makes no sense without an explanation, and then the explanation is just justifying the name, it doesn't help to make it a name that actually makes sense
06:27:47 <b_jonas> ais523: couldn't they require the programs to use physical address pointers, so they can't freely choose addresses when they mmap, and the memory processor tracks which task can access each physical page?
06:28:08 <ais523> b_jonas: that's possible but I don't think it has advantages over an MMU
06:28:17 <ais523> you still need to check for permissions, you may as well do page-mapping in the process
06:28:51 <ais523> I guess the pagetables would be smaller, meaning that you could maybe have fewer levels?
06:29:41 <b_jonas> surely it has advantages over an MMU that does address translation! the address translation with pages as small as 4 kilobytes is a large part of what makes the caches in CPUs so hard to optimize!
06:29:42 <ais523> now I'm reminded of the way that some CPU architectures raise interrupts to ask the kernel to manually fill in the TLB, rather than pagewalking on their own
06:30:00 <b_jonas> if you support only larger pages then it's less of a problem, but it's still complicated to support correctly
06:30:08 <korvo> ais523: Not to explain to the professor how PCI works, but the way I think of it is that PCIe has limited bandwidth. The GPU's memory controller mostly has to make scheduling decisions about what to DMA next; it sees what's upcoming in the command queue, to give you an idea of how deep the decoding pipeline gets.
06:31:03 <b_jonas> in CPUs the problem is that the CPU wants to use information from the L1 cache before it knows for sure that the translated address for the cache line matches the requested address, and then has to be able to quickly change its mind if it turns out that the L1 cache hit was fake and it has to use a value from the L2 cache.
06:31:21 <korvo> I'm told that this is the main reason that nouveau doesn't just have a full disassembly of the nVidia microcode. If it were that easy then they'd have done it, like people have done with x86 microcodes. But the nVidia blob contains a scheduler for the memory controller, or the moral equivalent.
06:31:55 <b_jonas> the L1 cache wants to be very low latency so the translated address is available *almost* too late. the address translation is on a critical path
06:31:58 <ais523> I was working one level higher than that, you can definitely say "please DMA this memory" in GPU source code (not in that many words but with the same effect), but I was just working at the level of "there will be some delay if you do this" and at my level of abstraction the details of the pipeline didn't matter
06:32:26 <b_jonas> moreover, the L1 cache can't go larger than 8 times the page size, so we're stuck with the same L1 cache size in the best CPUs for decades
06:32:58 <korvo> b_jonas: I like that description because it now makes me wonder whether the TLB is yet another skeuomorphism. Like, was there a person in the days of the telegraph or switched telephone that had a little side table, and on the table was a big book of addresses, and the operator had to physically look at the side table...
06:33:26 <ais523> b_jonas: maybe the solution here is some sort of noalias caching, in the sense that you just ban accessing memory that would be appear to be a cache hit but actually isn't
06:33:41 <korvo> Wait, L1 cache is based on *page* size? Is this an x86 detail?
06:33:51 <esolangs> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=177585&oldid=177566 * EsolangerII * (+57) /* Introductions */
06:34:15 <ais523> GPUs have two sorts of memory, one of which works a lot like CPU memory but is typically read-only, and the other more GPU-specific one which is read-write but the caches are loaded manually
06:34:16 <korvo> I thought L1 was based on how much RAM could be physically made available next to the fetch unit inside each core?
06:34:19 <esolangs> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=177586&oldid=177585 * EsolangerII * (+10) /* Introductions */
06:34:57 <ais523> if you're loading the caches manually anyway, simply saying "don't do cache collisions" is something you can actually do
06:35:41 <ais523> oh! I think I know how GPUs could do MMU-like behaviour: you fix the addresses in GPUspace that each thread can access (you have to do that anyway so that they can act in parallel with each other) and you do the permission checks only when you're copying into and out of the cache-equivalent
06:36:03 <ais523> you don't need to cache the address translations because you're only ever using them as part of a slow operation anyway
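[ais523's idea above can be sketched in Python: the permission check runs once per bulk transfer into block memory rather than once per access, so no TLB-like cache of translations is needed. All names and the data layout are hypothetical.]

```python
def copy_to_block_memory(task, base, length, allowed):
    """Check a kilobytes-sized transfer against a task's allowed ranges.

    `allowed` maps each task to a list of (lo, hi) half-open byte ranges
    it may touch in main memory; the check happens only at copy time.
    """
    for lo, hi in allowed.get(task, ()):
        if lo <= base and base + length <= hi:
            return True   # transfer permitted; the DMA would start here
    raise PermissionError(
        f"task {task!r} may not access [{base}, {base + length})")
```

[Because individual thread accesses only ever touch block memory, this one check per copy is the entire translation-and-permission machinery.]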
06:36:32 <esolangs> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=177587&oldid=177586 * EsolangerII * (+5) /* Introductions */
06:37:05 <esolangs> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=177588&oldid=177587 * EsolangerII * (+10) /* Introductions */
06:37:58 <korvo> Yeah. I should point out that word sizes are bigger on the GPU; the fetches are like 256 bits minimum. This is hidden at the ISA level behind the 2x2 abstraction I mentioned earlier. IIRC nVidia docs talk about "warps", as in looms.
06:38:25 <ais523> the Rust developers had a spate of GPU terminology trouble recently
06:38:42 <ais523> because they wanted to create a GPU-agnostic API but all the GPUs were using different names for the same concepts
06:38:52 <ais523> and so it was hard to figure out what names to use in documentation
06:39:17 <ais523> the course I was teaching used nVidia terminology, so we were dealing with warps and half-warps
06:39:32 <ais523> warp, thread, block, kernel as the four main levels of abstraction
06:39:36 <korvo> And they probably wanted to expose low-level control over running warps in parallel. Seductive, the Dark Side is.
06:39:50 <ais523> (this is also the reason I said "OS kernel" above, to clarify that I didn't mean a GPU kernel)
06:40:14 <korvo> Sheesh. And I think nVidia/CUDA "block" is what SGX and other ARM SoCs call "tile".
06:40:34 <esolangs> [[Pastebin]] https://esolangs.org/w/index.php?diff=177589&oldid=144016 * Dragoneater67 * (+62) /* Some random C++ code I found online */
06:41:47 <korvo> The model I would want, because I'm a weenie who hates going fast, is to run a pure function from tensors to tensors. The pure function can vary, it can have uniform params, but it's just a tensor mapping. AIUI this is basically what Futhark offers.
06:42:23 <ais523> anyway, a) all the threads in a block have equal permissions to each other, b) threads generally work entirely in terms of memory that belongs to the block, c) each block has its own address space for block memory
06:42:44 <b_jonas> korvo: I think it's based on page size. Multiple pages in logical address can map to the same physical address, but caching has to use the physical address so that if you write to one logical address and read from the other you get the correct result. There are 64*8 cache lines, each 64 bytes long. You get the lower 12 bits of the address first, because that's the same in logical and physical, and the
06:42:47 <ais523> which means the only time you would need to do an address translation and permission check is when copying between block memory and main memory, and that's normally done kilobytes at a time
06:42:50 <b_jonas> L1D cache can tell which of the 64 groups of 8 cache lines it has to use from that, and later when it learns the physical address it picks one of those 8 to serve, or for a cache miss it has to pick the oldest one of those 8 to flush. I think this would be too complicated to do in time with 16 instead of 8 cache lines in each group.
06:43:19 <ais523> (again, talking about standard GPU memory here, not the CPU-like memory that's more freely addressible)
06:43:23 <b_jonas> I'm not a CPU designer so it's possible that I'm wrong about this and there's some other reason why the L1D cache size doesn't go over 32 kilobytes.
06:43:37 <korvo> b_jonas: That makes perfect sense! Beautifully explained. It might have come out of my electrical engineering textbook.
06:45:08 <esolangs> [[Language list]] https://esolangs.org/w/index.php?diff=177590&oldid=177490 * EsolangerII * (+45) *
06:45:27 <ais523> cache associativity is one of those optimisations where you know theoretical slowpaths exist, but they're unlikely and not actually *incorrect* so you just do the optimisation anyway and hope that the slowpath never gets hit
06:46:04 <ais523> and then people researching low-level CPU behaviour intentionally create a lot of clashing addresses and use the performance changes to measure the cache associativity
06:46:32 <b_jonas> this is 32 kilobytes per CPU core by the way, and there's a separate L1C cache for code which can be separate because if you ever write to a cache line that's used as code then the CPU is allowed to go very slow to recover from that
06:47:13 <ais523> b_jonas: not just that, it's allowed to not notice
06:47:27 <korvo> b_jonas: Hm. So maybe I'm misremembering, and it's L2 cache that is constrained by physics?
06:47:57 <ais523> apparently you need to run one of a few specific instructions to recheck the cache if potentially cached code got written to
06:48:01 <korvo> I do recall that L2 and L3 are explicitly at different levels of sharing and coherence precisely so that L3 can be shared by more cores, which inherently means that it must be (equally) far away from all of its users.
06:48:04 <ais523> (CPUID is one of them, there's a faster one but it's new)
06:48:54 <ais523> my guess is that storing data in L1C evicts it from L1D and vice versa, in much the same way as if a different CPU had written it – it could use the same mechanism
06:49:10 <b_jonas> ais523: but there are also some cases where you can use this knowledge to intentionally create good memory addresses to speed up your code
06:49:49 <b_jonas> ais523: I think on x86 the CPU is required to notice changing code as long as there's a jump instruction between the code change and execution, but maybe this has changed at some point?
06:50:19 <ais523> b_jonas: I think it's something like, if it was changed on the same core a jump is enough, if a different core changed it you need CPUID
06:50:32 <ais523> for recent x86, anyway
06:50:36 <ais523> old x86 didn't need the jump
06:51:02 <b_jonas> oh the jump was needed even in very old x86 (in some cases anyway)
06:51:27 <ais523> IIRC DOS NetHack has a self-modification without a jump
06:51:38 <ais523> at least if running under HDPMI32
06:51:40 <b_jonas> because even the original 8086 can read ahead a few bytes of instructions
06:52:00 <ais523> I remember committing a fix to an emulator so that it would emulate it correctly
06:52:18 <b_jonas> ais523: you could perhaps do it in some way other than a jump, but usually a jump is needed
06:52:34 <b_jonas> why the heck does DOS NetHack do that?
06:52:45 <b_jonas> do you load overlay code without a jump between?
06:53:26 <b_jonas> oh, but isn't that in a context where you modify one byte and it's fine if the CPU only notices it at a later time when it runs the code as long as it's atomic?
06:53:41 <b_jonas> if you do that then no jump is probably fine
06:53:46 <ais523> the standard library wants to provide a function for "call interrupt number X" but the x86 INT instruction can't take the interrupt number from a register, it has to be an immediate
06:54:08 <b_jonas> why would you want to call an interrupt with indirect number?
06:54:09 <ais523> so either you need a jump table with an option for every possible interrupt number or you need self-modifying code
06:54:26 <ais523> you wouldn't have to if you could inline the standard library
06:54:37 <korvo> ais523: So, maybe I'm outdated on this, but ISTR that GPU memory access is dominated by sampling strategy. Usually every pixel in a warp will look up nearly the same texels; they'll be near each other in 2D space or 3D space or whatever. If the texels were Z-tiled or Morton-tiled or etc. then a cache row is more likely to hold multiple nearby texels. This is why nVidia does that to all of their texture memory.
06:54:49 <ais523> but the standard library wants to provide a function for calling interrupts, and it wants to provide one function, not a function for each possible interrupt
06:55:18 <ais523> korvo: we basically didn't use texture memory on the course
06:56:02 <b_jonas> that does sound like you need a jump between modifying the instruction and executing it though
06:56:03 <ais523> of course texture memory is very important in games and graphics programming, but we were doing GPGPU with somewhat predictable/constrained inputs
06:56:05 <korvo> ais523: Good call. It's a headache. It was a big deal in the community when they finally reverse-engineered the GPU's native memory formats.
06:56:27 <ais523> so we could just load all the data we needed into block memory
06:57:12 <ais523> that said, plenty of weird tiling stuff came up in the actual algorithms (GPU matrix multiplication is a classic example of the genre)
06:57:30 <ais523> even when you're manually filling the caches it needs a lot of thought to do that efficiently
06:57:51 <ais523> I'm talking about big matrices here, not the little 4×4 ones which have a builtin
06:58:57 <ais523> the basic challenge is to ensure that each value is only loaded into one block at a time, in order to make the most of your parallelism
06:59:11 <esolangs> [[One Command Programming Language(OCPL)]] N https://esolangs.org/w/index.php?oldid=177591 * EsolangerII * (+503) Created page with "One Command Programming Language is a programming language that uses only one command, !(). If there is one argument, like this, !(1), it will print what is inside. !("Hello, World!") // prints Hello, World. If there are two arguments,
06:59:35 <korvo> Yep. It's a perennial desire. The Weather Channel reportedly paid for the r200 Radeon driver so that they could do weather modeling on those GPUs, despite basically no customizable shaders. I've never seen their code, but I know weather models need lots of linear algebra, so they must have done it somehow.
07:00:21 <ais523> the funny thing is, with the rise of LLMs, I don't think people even use GPUs for workloads that are mostly matrix multiplication any more because nowadays there are specialised chips for that
07:00:30 <ais523> (but GPUs are very good at doing large matrix multiplications)
07:00:48 <korvo> Don't people usually use off-the-shelf algorithms for this? CUDA kernels or whatever?
07:01:13 <korvo> I guess it should come up in a class.
07:02:06 <b_jonas> ais523: don't both the GPU and CPU have parts in them that are specialized for matrix multiplication? like in recent CPUs and GPUs
07:02:34 <ais523> korvo: oh yes, they do – I was teaching the sort of class where you tell the students how the standard library works internally
07:03:01 <ais523> (not directly but you teach the relevant principles)
07:03:50 <ais523> in engineering you care about how to use the tools, in computer science you care about how/why the tool works
07:04:01 <ais523> and this was a computer science course
07:04:14 <b_jonas> maybe not enough parts in those chips are, so specialized chips that are denser are needed for machine learning; and other specialized chips that are 95% SHA-256 computation are needed for bitcoin mining
07:04:27 <ais523> b_jonas: some very recent Intel CPUs have matrix multiplication operations, yes
07:04:56 <ais523> I am surprised by this, it's the sort of thing you would expect to delegate to a different type of processor instead if you're doing more than a trivial amount, so it strikes me as mostly a marketing thing
07:05:17 <b_jonas> ais523: even the not very recent ones are optimized so that the execution units have really high throughput for executing mostly floating point multiply and add instructions
07:05:35 <ais523> b_jonas: are you talking about Intel CPUs?
07:05:43 <ais523> they do have a surprising amount of FMA units
07:05:52 <ais523> to the extent that I think normal multiplication is implemented as FMA of -0
07:06:16 <b_jonas> oh it's definitely partly a marketing thing
07:06:45 <ais523> in general I think Intel has problems trying to persuade people to upgrade to newer chips
07:06:58 <ais523> and they keep inventing bizarre features because of that
07:07:22 <ais523> (also there's the persistent historical situation of "Intel specifies a new feature but only AMD implements it")
07:07:22 <korvo> We were talking earlier about how to actually get compilers to emit FMAs: https://lobste.rs/s/bunmdv/faster_asin_was_hiding_plain_sight
07:08:00 <ais523> FMA is really awkward from the programmer's point of view because if you request an FMA but the hardware doesn't have one you get a slow fallback
07:08:13 <ais523> and if you don't request an FMA the compiler can't normally use it due to excessive rounding
07:08:21 <ais523> maybe there should be a maybe_fma or the like that gives the compiler a choice
07:08:29 <b_jonas> don't we have a C pragma specifically for that?
07:09:20 <ais523> there's also the practical problem that not all x86-64 CPUs support an FMA instruction
07:09:47 <ais523> and many programmers are unwilling to have their program not be able to run on older CPUs, but switching between different instructions at runtime has its own issues
07:09:49 <b_jonas> `#pragma STDC FP_CONTRACT ON` then you write a multiplication and addition in your code and the compiler is allowed to emit an fma
07:09:52 <HackEso> #pragma? No such file or directory
07:10:26 <b_jonas> and before that there was a compiler flag
07:11:01 <ais523> and of course the silly incident where Intel and AMD each specified FMA instructions and then each implemented the others' specification
07:11:11 <korvo> In some languages there's mixfix ops with two parts. `b ? x : y` for example. In E, modular exponentiation is mixfix, `b ^ e % m` or so. It would be nice if FMA could arise from a standardized mixfix `a * x + b`.
07:11:26 <ais523> (they're synchronized again now, on the version originally specified by AMD and implemented by Intel)
07:11:57 <korvo> (This is probably the biggest GPU programming influence on Monte! It doesn't guarantee FMA but has syntax set up for it.)
07:12:05 <ais523> korvo: I've been increasingly thinking that FPU code should have special "rounding parentheses" that show where the rounding goes
07:12:19 <ais523> err, floating point in general, not FPU specifically
07:13:30 <b_jonas> ais523: yes, but I think all the FMA instruction thing was before it turned out that both Intel and AMD CPUs have multiple kinds of speculative execution vulnerabilities, and then everyone upgraded just to be sure that their CPU doesn't have them
07:13:50 <b_jonas> kind of unfortunate but that should have solved the FMA problem by now
07:14:07 <ais523> b_jonas: that doesn't really help because there are almost certainly lots of undiscovered such vulnerabilities
07:14:48 <ais523> anyway, I mostly stopped thinking about this topic because when I do I start thinking about how to do a fused add-add, which sounds easy but is harder than it seems to do correctly
07:15:41 <b_jonas> yeah, the remaining ones are the hard ones that the CPU makers can't fix because they require the compiler writers and low level library writers to collaborate
07:16:44 <ais523> I don't think it's possible to make a confident statement about the remaining ones
07:17:07 <ais523> given the history there's almost certainly going to be at least one subtle one that's extremely hard to fix, and at least one stupid oversight
07:18:29 <ais523> actually I think even a straightforward Spectre v1, "bounds check / read from array / indirect read with an address calculated based on the read value", hasn't been fully fixed yet
07:18:31 <b_jonas> some of them aren't specifically speculative execution but other side channel leaks
07:19:37 <ais523> even the class of "covert channel from speculatively running code to non-speculatively running code" is likely not fully explored yet
07:20:40 <ais523> now I'm thinking of that amazing Spectre v2 variant where the processor was tricked into predicting a branch from an instruction that wasn't actually a branch instruction
07:21:22 <b_jonas> yeah, you're probably right, there's too many side channel leaks to fix all of them easily
07:22:26 <b_jonas> also I should ask #esolangs my cryptography question some time
07:22:54 <b_jonas> but it's not something I can do justice to in just a few lines
07:23:26 <ais523> I do like the generic fix of ensuring that programs are deterministic, which prevents them translating side channels or covert channels into non-side-channel behaviour (but doesn't prevent them taking data from a covert channel and outputting it via a side channel)
07:23:43 <ais523> the hard part is removing the primary externally visible side channel, which is timing
07:24:59 <ais523> come to think of it, this is essentially the same problem that we have in bridge tournaments: in bridge, each contestant is a pair of humans who are not allowed to communicate except via the moves they make
07:25:14 <b_jonas> the other "generic fix" is to never run multiple programs that don't trust each other on the same hardware
07:25:48 <ais523> (each person has partial information – the game is about trying to make moves that give your partner enough information to make good moves of their own, whilst ensuring that your own move isn't too bad)
07:25:49 <b_jonas> true, bridge does try to solve that
07:26:01 <ais523> playing online blocks almost all the side channels, but not timing
07:26:30 -!- Sgeo has quit (Read error: Connection reset by peer).
07:26:38 <ais523> I think the solution here might be a fixed time limit per move, but players like to be able to think as long as they like, like in chess…
07:28:13 <b_jonas> yeah, you have to make bridge teams submit a computer program that plays their strategy, and then impose a time limit on each step when that program executes, to get around that
07:28:31 <ais523> this is extremely hard
07:28:35 <b_jonas> which is sort of what they're trying to impose on bridge but it's not that formal
07:28:38 <ais523> just explaining human systems to a computer is difficult
07:28:47 <ais523> (explaining them to a human is also difficult, but easier)
07:30:03 <ais523> current bridge software is really bad at communicating with its partner, when it does well it's primarily through not making thinkos and through being able to work out complex lines of play when it has full information
07:30:58 <korvo> The last time I read about a bridge scandal, it was — and sorry in advance for getting the terminology wrong — a side channel via the return box where discarded cards are placed?
07:31:22 <b_jonas> does that apply only to bridge software that wants to communicate with a human partner, or also bridge software that plays a team?
07:31:36 <korvo> It was something remarkably subtle like one of four choices of corner, and it wasn't just sending something obvious like a suit or rank.
07:32:04 <ais523> korvo: there were two scandals that that might have been, but only one was that subtle
07:32:24 <ais523> discarded cards in bridge are played like non-discarded cards, just the players have to remember they have no value
07:32:46 <ais523> but bridge has two phases, the bidding and the play
07:33:15 <ais523> and there was something complicated about placement of the tray that was used to pass the information about the bidding from one partner to the other, IIRC
07:33:26 <b_jonas> I think the only reason why bridge mostly works and doesn't have these standards is that it's mostly people who already want to keep the information hygiene rules who play it, especially British people.
07:33:59 <ais523> b_jonas: I'm primarily concerned about people who are consciously honest but subconsciously pick up information they aren't entitled to
07:34:30 <b_jonas> ah, like CPUs that don't want to deliberately leak information on a side channel?
07:34:35 <esolangs> [[One Command Programming Language(OCPL)]] https://esolangs.org/w/index.php?diff=177592&oldid=177591 * EsolangerII * (+84)
07:34:42 <ais523> e.g. if you can see your partner as you play it is too easy to pick up their emotions from body language, so serious tournaments have a barrier across the table and use trays to pass information back and forth
07:34:58 <korvo> ais523: That sounds like the one. I saw a video of tourney play so that they could show what the tray ought to look like, and it felt very solemn. I'm guessing that that's just the tourney atmosphere for a game where sharing knowledge is forbidden?
07:35:16 <ais523> it's only done in important tournaments and normally only in the last few rounds
07:35:32 <ais523> normally (when playing in person) you just get the four people sitting round a table without many precautions
07:35:40 <ais523> but I don't like that because of how much unauthorised information it creates
07:38:07 <korvo> Makes sense. In this part of the USA, the contract game we usually play is whist, but much more common is the non-bidding game of hearts. Hearts is a perfect-play game, or however you call it; it's not interesting professionally because it's all down to which hand you're dealt.
07:39:07 <ais523> well, hearts is theoretically complicated because you have multiple opponents who are not allied with each other
07:39:34 <ais523> I would expect it to be similar to poker in that it can be broken by collusion
07:39:44 <korvo> Yeah. But there's a bit of game theory, so even if you're not allied, you get to bet against -- exactly!
07:40:09 <korvo> We also play lots of poker and blackjack for fun. Same idea. I guess we like bluffing games.
07:40:42 <b_jonas> there are card games with bidding on tricks where everyone bids simultaneously, but I think those can be broken with collusion too
07:43:36 <b_jonas> there's also at least one competitive trick-taking card game with some limited bidding that has just two players, that's kind of the easy way to get around these problems
07:43:50 <b_jonas> but it's more boring than the game with three or more players
07:44:36 -!- chloetax has joined.
07:45:01 <korvo> It is a dark and stormy night. I'm going to bed. Peace.
07:46:39 <esolangs> [[One Command Programming Language(OCPL)]] https://esolangs.org/w/index.php?diff=177593&oldid=177592 * EsolangerII * (+120)
07:58:01 <esolangs> [[One Command Programming Language(OCPL)]] https://esolangs.org/w/index.php?diff=177594&oldid=177593 * EsolangerII * (+218)
07:59:54 <ais523> korvo (for when you wake up): I think this is the video you were thinking of: https://www.youtube.com/watch?v=831tJ4EHLBY
08:01:00 <ais523> I was almost right, they weren't signalling using the tray, but using the board that's used to hold the cards when carrying them between tables (nowadays, bridge tournaments are usually scored by comparing the play of the same deal at multiple tables, so you need to ensure that the players at each table get the same cards, and that's done by using a board that holds the four hands separately)
09:20:12 -!- chloetax has quit (Ping timeout: 264 seconds).
09:27:51 <ais523> I looked at the x86 emulator code I wrote to handle self-modifying code – it worked by simulating a no-op interrupt if memory was modified that could be in code cache (thus causing the code to be re-recompiled after the interrupt was handled)
09:28:07 <ais523> a real processor could use the same method (and probably does do something similar)
09:45:39 <esolangs> [[Talk:]] N https://esolangs.org/w/index.php?oldid=177595 * C++DSUCKER * (+43) Created page with "This esolang is absolutely AWWESOME!!!!! :D"
09:45:51 <esolangs> [[Talk:]] https://esolangs.org/w/index.php?diff=177596&oldid=177595 * C++DSUCKER * (+27)
09:46:05 <esolangs> [[Talk:]] https://esolangs.org/w/index.php?diff=177597&oldid=177596 * C++DSUCKER * (+1)
09:46:19 <esolangs> [[Talk:]] https://esolangs.org/w/index.php?diff=177598&oldid=177597 * C++DSUCKER * (+32)
09:46:42 <esolangs> [[Talk:]] https://esolangs.org/w/index.php?diff=177599&oldid=177598 * C++DSUCKER * (+27)
09:46:58 <esolangs> [[Talk:]] M https://esolangs.org/w/index.php?diff=177600&oldid=177599 * C++DSUCKER * (-1)
09:54:06 <esolangs> [[Ring-around-the-Rosie]] https://esolangs.org/w/index.php?diff=177601&oldid=175150 * Salpynx * (+4482) /* Examples */ 99 bottles for 1 reg Minsky machine
09:55:23 <esolangs> [[Ring-around-the-Rosie]] M https://esolangs.org/w/index.php?diff=177602&oldid=177601 * Salpynx * (+25) /* Computational class */ implemented for testing evaluation strategies
10:31:02 -!- ais523 has quit (Quit: quit).
10:36:45 <esolangs> [[]] https://esolangs.org/w/index.php?diff=177603&oldid=177556 * Qpx5997 * (+394)
10:43:04 <esolangs> [[]] https://esolangs.org/w/index.php?diff=177604&oldid=177603 * Qpx5997 * (+333) /* Syntax */
10:54:32 <esolangs> [[]] https://esolangs.org/w/index.php?diff=177605&oldid=177604 * Qpx5997 * (+812)
11:00:34 <esolangs> [[One Command Programming Language(OCPL)]] https://esolangs.org/w/index.php?diff=177606&oldid=177594 * EsolangerII * (+55)
11:01:12 <esolangs> [[]] https://esolangs.org/w/index.php?diff=177607&oldid=177605 * Qpx5997 * (+76) /* Commands */
11:02:44 <esolangs> [[One Command Programming Language(OCPL)]] M https://esolangs.org/w/index.php?diff=177608&oldid=177606 * EsolangerII * (+0)
11:03:04 <esolangs> [[One Command Programming Language(OCPL)]] https://esolangs.org/w/index.php?diff=177609&oldid=177608 * EsolangerII * (+5)
11:08:47 <esolangs> [[]] https://esolangs.org/w/index.php?diff=177610&oldid=177607 * Qpx5997 * (+58)
11:58:41 <esolangs> [[Template:Unf]] https://esolangs.org/w/index.php?diff=177611&oldid=177551 * None1 * (-36) Blanked the page
12:07:31 <int-e> Wtf, how does evince keep getting worse?! Can't scroll up with cursor keys anymore... it works once, but also selects the zoom input field.
12:08:10 <int-e> (well, maybe it's a recent GTK change)
12:18:32 <esolangs> [[ChangeFuck]] https://esolangs.org/w/index.php?diff=177612&oldid=177562 * None1 * (+920)
12:21:09 -!- amby has joined.
12:25:50 -!- ajal has joined.
12:30:05 -!- amby has quit (Ping timeout: 244 seconds).
13:10:45 <esolangs> [[Qpx5997]] N https://esolangs.org/w/index.php?oldid=177613 * Qpx5997 * (+81) Created page with "hey guys, im qpx5997, creator of [[]]. i like object shows too!"
13:11:09 <esolangs> [[Qpx5997]] https://esolangs.org/w/index.php?diff=177614&oldid=177613 * Qpx5997 * (-81) Blanked the page
13:11:59 <esolangs> [[User:Qpx5997]] N https://esolangs.org/w/index.php?oldid=177615 * Qpx5997 * (+81) Created page with "hey guys, im qpx5997, creator of [[]]. i like object shows too!"
13:17:57 <esolangs> [[Qpx5997]] https://esolangs.org/w/index.php?diff=177616&oldid=177614 * Qpx5997 * (+26) Redirected page to [[User:Qpx5997]]
14:04:58 -!- chloetax has joined.
14:18:20 -!- FireFly has quit (Ping timeout: 267 seconds).
14:20:05 -!- FireFly has joined.
14:56:13 <esolangs> [[Qpx5997]] https://esolangs.org/w/index.php?diff=177617&oldid=177616 * Aadenboy * (-26) remove redirect to userspace
16:04:50 -!- lynndotpy609362 has quit (Quit: bye bye).
16:05:54 -!- lynndotpy6093627 has joined.
16:40:32 -!- impomatic has joined.
17:18:22 <esolangs> [[Bit-ter lang]] https://esolangs.org/w/index.php?diff=177618&oldid=177584 * Yayimhere2(school) * (-154) /* Class */ Its a bounded in memory! AND there's no loops!
17:21:15 <esolangs> [[Bit-ter lang]] https://esolangs.org/w/index.php?diff=177619&oldid=177618 * Aadenboy * (+59)
17:27:39 -!- Sgeo has joined.
17:37:20 -!- joast has joined.
17:41:32 <esolangs> [[Countable]] https://esolangs.org/w/index.php?diff=177620&oldid=176989 * Aadenboy * (+141)
17:42:15 <esolangs> [[Countable]] https://esolangs.org/w/index.php?diff=177621&oldid=177620 * Aadenboy * (-74) /* Commands */ this is redundant and WRONG
19:14:46 -!- impomatic has quit (Quit: Client closed).
19:49:44 -!- Lord_of_Life_ has joined.
19:50:17 -!- Lord_of_Life has quit (Ping timeout: 244 seconds).
19:52:37 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
20:31:52 <korvo> Leaving [[Java]] as a redlink has become very funny to me. Big thanks to Past Corbin for placing that bet.
20:32:23 <korvo> A language so non-esoteric that bluelinking it would be pointless.
20:40:12 <int-e> Hehe, this looks a bit janky: https://int-e.eu/~bf3/tmp/shapez2-train-merge.jpg (showing 6 trains arriving all at once at the vortex (central hub); it could be 8 but the 4th direction actually does something useful :)
20:43:14 <korvo> Like one of those animations of assembling a 4-dimensional hyperobject from 3D faces.
20:49:21 -!- Artea has joined.
21:33:59 -!- ais523 has joined.
21:43:41 -!- somefan has joined.
21:48:44 -!- somefan has quit (Remote host closed the connection).
21:58:56 -!- somefan has joined.
23:08:11 <somefan> has anyone visited #anagol on freenode?
23:08:27 <somefan> ref: http://golf.shinh.org/ second para
23:13:06 <fizzie> Logs suggest I was there from 2014-10 to 2021-06.
23:16:27 <somefan> is the server offline? the wholist is completely empty
23:16:50 <somefan> or maybe i've never seen an empty server before
23:18:33 -!- somefan has quit (Remote host closed the connection).
23:18:59 -!- somefan has joined.
23:21:40 <fizzie> I guess it might have just dried up.
23:23:17 <int-e> I almost forgot that Freenode is still a thing, technically.
23:32:51 <somefan> that's sad, should've migrated to libera or someplace before the sweep
23:39:18 -!- impomatic has joined.