00:00:54 <b_jonas> so basically, the United Kingdom now has to sacrifice three or four prime ministers every year in order to be able to delay the Brexit forever and have someone to blame for it
00:01:22 <b_jonas> if only they could start a Ministry of Brexit, so that they only had to sacrifice the Brexit minister, rather than the prime minister
00:16:01 <b_jonas> but maybe the political dragon specifically demands prime ministers
00:27:28 -!- Phantom__Hoover has quit (Quit: Leaving).
00:35:49 <shachaf> Sgeo: Yes, Microsoft's reverse WINE runs in the kernel and does trickery.
00:36:04 <shachaf> But I imagine you could implement it with a debugger or something.
00:37:28 <kmc> you mean WSL?
00:39:49 <b_jonas> Sgeo: nah, I think if there was a need to emulate windows syscalls by catching the actual syscall, then the linux kernel would just grow an api for user processes to do exactly that
00:40:02 <b_jonas> to catch the syscall that is, not to do the whole emulation
00:40:10 <b_jonas> how does UML work by the way?
00:44:16 <kmc> b_jonas: ptrace is already an API to catch syscalls
00:44:33 <kmc> and UML is a different architecture from x86 or whatever
00:44:42 <b_jonas> kmc: yeah, ordinary linux syscalls (all flavors of them), but I don't know if it would catch windows syscalls
00:44:51 <kmc> so I think the "syscalls" are implemented as userspace calls into the user mode linux kernel
00:45:05 <kmc> you can't run ordinary linux binaries in UML, I don't think
00:45:21 <b_jonas> so that's why it didn't work when I just tried to copy an x86 binary?
00:45:22 <kmc> `file /bin/ls
00:45:23 <HackEso> /bin/ls: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 2.6.32, BuildID[sha1]=3c233e12c466a83aa9b2094b07dbfaa5bd10eccd, stripped
00:45:34 <b_jonas> but that would make UML all but useless
00:45:41 <kmc> b_jonas: no because we have these things called compilers
00:45:43 <b_jonas> because nobody would actually compile programs for it
00:45:54 <b_jonas> I don't think it's a different architecture though
00:45:54 <kmc> most of the software people want to run on linux is open source
00:45:59 <HackEso> Linux (none) 4.9.82 #6 Sat Apr 7 13:45:01 BST 2018 x86_64 GNU/Linux
00:46:13 <kmc> well, I might be wrong, it's been forever since I played with uml
00:46:29 <kmc> maybe it does use ptrace for syscall emulation
00:46:34 <kmc> i know it uses it for some weird pagetable manipulation stuff
00:46:45 <b_jonas> maybe there's some interface other than ptrace
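(A minimal sketch of the ptrace-based syscall interception kmc is referring to, Linux/x86-64 only; error handling and signal-delivery stops are glossed over, so this illustrates the mechanism rather than anything UML actually contains.)

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            ptrace(PTRACE_TRACEME, 0, 0, 0);    /* let the parent trace us */
            execlp("true", "true", (char *)0);
            _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);               /* child stops at its execve */
        int entering = 1;
        for (;;) {
            ptrace(PTRACE_SYSCALL, pid, 0, 0);  /* resume until next syscall entry/exit */
            waitpid(pid, &status, 0);
            if (WIFEXITED(status)) break;
            if (entering) {                     /* simplification: assumes only syscall stops */
                struct user_regs_struct regs;
                ptrace(PTRACE_GETREGS, pid, 0, &regs);
                printf("syscall %llu\n", (unsigned long long)regs.orig_rax);
            }
            entering = !entering;
        }
        return 0;
    }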
00:47:48 -!- FreeFull has joined.
00:52:14 <kmc> shachaf: ham radio : communication :: esoprogramming : programming
00:52:38 <kmc> https://old.reddit.com/r/amateurradio/comments/8lpk45/moon/dzhpm4k/ "the military spent some time and money on this Back In The Old Days, but they stopped doing it because it's dumb as dog shit and horrifically inefficient, which means it is absolutely irresistible for amateur radio operators."
00:52:39 <b_jonas> but maybe I'm taking metaphors too seriously
01:18:46 <shachaf> kmc: Update: Now "/lib64/ld-linux-x86-64.so.2 ./out.a" runs successfully but just running the program fails.
01:20:19 <shachaf> I previously called it "out" but that was either too confusing or not confusing enough.
01:20:59 <kmc> why not call it a.out
01:21:43 <shachaf> I guess, if I called it a.out, it would be an a.out file.
01:36:33 <shachaf> I built a debug musl loader and it's more helpful.
01:39:47 <int-e> I have plenty of ELF files called a.out.
01:40:13 <kmc> shachaf: does it also crash?
01:43:51 <shachaf> It's already crashed in several different ways.
01:44:17 <int-e> does the kernel say anything about it?
01:44:35 <shachaf> It says things like "segfault at 8"
01:45:56 <int-e> So, maybe some symbol didn't get resolved (relocated) properly :)
01:47:45 <int-e> I do wonder how hard it would be to transplant the kernel code into user space so it could be traced...
02:03:43 -!- doesthiswork has joined.
02:04:56 <b_jonas> int-e: you can emulate a whole virtual machine and debug the kernel that way
02:20:26 <shachaf> Oh, my PT_PHDR header was wrong, that's why.
02:21:48 <kmc> what's that one
02:25:57 <shachaf> It tells the dynamic linker where to find the segment headers.
02:27:04 <kmc> that sounds pretty important
02:36:17 -!- Sgeo has quit (Ping timeout: 258 seconds).
02:41:43 -!- b_jonas has quit (Quit: leaving).
02:41:50 -!- Sgeo has joined.
02:50:25 <shachaf> It seems kind of silly because it's the first segment itself.
02:50:36 <shachaf> Well, some segment header, maybe not the first.
04:20:10 <int-e> it tells the kernel what to map into memory in the first place
04:20:11 -!- FreeFull has quit.
04:20:48 <int-e> (which /may/ explain the difference between executing the thing and asking ld.so to load it for you...)
04:21:19 <int-e> (all AFAIUI, which isn't very far.)
04:22:34 <shachaf> int-e: No, those are the LOAD segments.
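(For reference, a sketch of walking an ELF64 file's program header table, matching the distinction being made: the kernel maps the PT_LOAD segments, while PT_PHDR merely records where the header table itself lives so the dynamic linker can find it. Assumes a valid little-endian ELF64 file and omits error checking.)

    #include <elf.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        if (argc < 2) return 1;
        FILE *f = fopen(argv[1], "rb");
        if (!f) return 1;
        Elf64_Ehdr eh;
        fread(&eh, sizeof eh, 1, f);
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(f, (long)(eh.e_phoff + i * eh.e_phentsize), SEEK_SET);
            fread(&ph, sizeof ph, 1, f);
            const char *t = ph.p_type == PT_PHDR   ? "PHDR"
                          : ph.p_type == PT_LOAD   ? "LOAD"
                          : ph.p_type == PT_INTERP ? "INTERP" : "other";
            printf("%-6s off=%#llx vaddr=%#llx filesz=%#llx align=%#llx\n", t,
                   (unsigned long long)ph.p_offset, (unsigned long long)ph.p_vaddr,
                   (unsigned long long)ph.p_filesz, (unsigned long long)ph.p_align);
        }
        fclose(f);
        return 0;
    }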
04:23:09 <shachaf> Someone posted this method for 2-out-of-3 secret sharing with xor: https://github.com/wybiral/tshare/blob/master/tshare.go
04:23:22 <shachaf> I feel like there should be a simpler way than that.
04:25:28 -!- doesthiswork has quit (Ping timeout: 268 seconds).
04:27:03 <shachaf> Hmm, https://eprint.iacr.org/2008/409.pdf
04:46:57 <shachaf> What's the simplest possible 2-of-3 sharing scheme? Say for sharing 1 bit.
04:48:45 <int-e> The natural thing to my mind is interpolating a linear polynomial over GF(2^2).
04:51:26 <int-e> But it ends up being more complicated than what you get if you mask part of the messages: http://paste.debian.net/1093525/
04:52:45 <shachaf> Say the bit is b (0 or 1) and we flip a 3-sided coin to a random value r (0 or 1 or 2). We give person p the value (b + r + p) % 3
04:52:58 <shachaf> Wait, that doesn't even let you recover the message, what am I saying.
04:53:22 <shachaf> I was thinking of a different scheme and I obviously simplified it too much.
04:57:55 <int-e> Ah, of course working modulo 3 works. Distribute r, m+r, 2m+r to the parties.
04:58:32 <int-e> (m is the secret message to be shared; r is random modulo 3)
04:59:23 <shachaf> Oh, that's better than the scheme I wrote out.
04:59:40 <shachaf> (I mean, the working scheme I wrote in a text file here, not the one I wrote above which was nonsense.)
05:00:43 <int-e> this is dual to the polynomial interpolation (the message is in the linear term now, not the constant term).
05:14:30 <int-e> shachaf: http://paste.debian.net/1093526/ ... so this can be thought of as polynomial interpolation over GF(2^2) :-)
05:23:14 <int-e> Hah I'm missing a ' at the end.
05:35:16 <int-e> x comes from the representation of GF(2^2).
05:35:56 <int-e> (polynomials in x over GF(2) modulo x^2+x+1)
05:38:16 <int-e> I see what you did there. I don't approve. I should've written "near the end".
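(A small worked example of int-e's mod-3 scheme, not taken from either paste: party i gets (i*m + r) mod 3 for a secret bit m and uniform random r; any two shares recover m, and a single share is uniformly distributed. rand() here is only a stand-in for a proper RNG.)

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static int inv3(int x) { return x == 2 ? 2 : 1; }   /* inverse mod 3, for x in {1,2} */

    int main(void) {
        srand((unsigned)time(0));
        int m = 1;                       /* the secret bit, 0 or 1 */
        int r = rand() % 3;              /* uniform mask mod 3 */
        int share[3];
        for (int i = 0; i < 3; i++) share[i] = (i * m + r) % 3;

        /* reconstruct from any two parties i < j: m = (s_j - s_i) / (j - i) mod 3 */
        int i = 0, j = 2;
        int rec = ((share[j] - share[i] + 3) % 3) * inv3(j - i) % 3;
        printf("secret=%d reconstructed=%d\n", m, rec);
        return 0;
    }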
07:13:41 -!- cpressey has joined.
07:25:26 <esowiki> [[Talk:An Odd Rewriting System]] https://esolangs.org/w/index.php?diff=64792&oldid=64778 * Chris Pressey * (+361) I admit defeat
07:31:08 <cpressey> Design for a pathological language, take 3: Fix an enumeration Tn of TMs and an enumeration of sentences Sn in Presburger Arithmetic. Input is <Sn,V|I>. Check if Sn is valid (V) or invalid (I). If it matches 2nd element of pair, simulate Tn, else nop.
07:32:16 <cpressey> There's still a problem: you want the two enumerations to be "different enough" from each other, but how do you guarantee that?
07:33:28 <cpressey> Maybe every 100th n there's an instance of PresA that's easy, and a TM that's useful.
07:35:20 <cpressey> But I guess the bigger question is: if I'm so bad at math, why do I even try to do it?
07:41:25 -!- Frater_EST has joined.
07:41:34 -!- Frater_EST has left.
07:49:02 <cpressey> I'm bad at software too, because to be good at software, you need to be charismatic and live in California.
08:01:38 -!- Lord_of_Life has quit (Ping timeout: 248 seconds).
08:02:55 -!- Lord_of_Life has joined.
08:04:51 -!- rodgort has quit (Quit: Leaving).
08:10:00 -!- rodgort has joined.
08:18:36 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=64793&oldid=64783 * PCC * (+99)
08:20:36 -!- heroux has quit (Ping timeout: 272 seconds).
08:38:53 -!- b_jonas has joined.
08:39:17 <b_jonas> shachaf: for secret sharing, see David Madore's program with which he has unknowingly won the IOCCC: ftp://ftp.madore.org/pub/madore/misc/shsecret.c
08:42:41 <esowiki> [[What Mains Numbers?]] N https://esolangs.org/w/index.php?oldid=64794 * PCC * (+682) what is What Mains Numbers and how to can you program with it?
08:43:06 -!- user24 has joined.
08:46:27 <esowiki> [[Language list]] https://esolangs.org/w/index.php?diff=64795&oldid=64785 * PCC * (+26) /* W */
09:21:32 -!- arseniiv has joined.
09:29:29 -!- b_jonas has quit (Quit: leaving).
09:34:32 <Taneb> Apparently, version 1.0 of the Haskell Report was published on the first of April 1990
09:34:42 <Taneb> Maybe it's been an elaborate April Fools' joke that got out of hand
09:54:38 -!- shachaf has quit (Ping timeout: 245 seconds).
10:02:40 -!- shachaf has joined.
10:36:37 -!- wob_jonas has joined.
10:37:07 <wob_jonas> Taneb: it certainly got out of hand, but I think it wasn't a joke
10:40:03 <cpressey> It was an April Fool's Serious
10:41:03 -!- heroux has joined.
10:57:02 -!- sebbu has quit (Quit: reboot).
11:19:48 -!- user24 has quit (Quit: Leaving).
11:23:54 -!- FreeFull has joined.
11:24:02 -!- oklopol has joined.
11:24:54 -!- FreeFull has quit (Client Quit).
11:25:59 <shachaf> $ ldd out.a statically linked
11:26:22 <shachaf> out.a: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/l, not stripped
11:27:34 <int-e> so what does ldd do? collect the shared objects linked in and spout that message if it comes up with nothing?
11:28:08 <shachaf> I'm not sure what ldd does.
11:28:52 -!- FreeFull has joined.
11:29:13 <shachaf> whoa, I didn't know about pldd
11:29:17 <int-e> "ldd invokes the standard dynamic linker with the LD_TRACE_LOADED_OBJECTS environment variable set to 1."
11:29:23 <wob_jonas> note that objdump is a cross-utility, it can read the executables of any platform on any platform
11:30:21 <shachaf> ...I also didn't know that ldd was a shell script.
11:30:43 <shachaf> Or that it used that mechanism.
11:30:53 <shachaf> Only platforms it knows about.
11:31:02 <wob_jonas> huh, didn't ldd use to use a more esoteric interface to communicate with the dynamic linker, where instead of an env-var, it invoked the program with argc being zero?
11:31:11 <shachaf> objdump won't tell me anything I don't already know, since I generated this ELF file myself byte by byte.
11:31:41 <shachaf> I mean, it won't tell me anything about my program.
11:31:58 <wob_jonas> well, it could tell you something if you don't fully understand how the ELF format works
11:31:59 <shachaf> The idea was to learn what wasn't compliant about it.
11:32:43 <shachaf> Man, using ld.so totally messes up my nice strace output.
11:32:51 <wob_jonas> like if you made a mistake or something
11:33:02 <shachaf> `` strace -fo tmp/OUT /bin/true
11:33:41 <HackEso> https://hack.esolangs.org/tmp/OUT
11:33:59 <shachaf> What! That's a lot nicer than I get on my system.
11:34:25 <shachaf> $ strace /bin/true |& grep 'ld\.so\.nohwcap' | wc -l
11:39:33 <shachaf> $ strace /bin/true |& wc -l
11:39:54 <shachaf> Anyway I guess I should try calling into libc and then ldd will probably call it dynamic.
11:40:16 <shachaf> But for that I'd need a bunch of things like a PLT and real relocations or something.
11:40:28 <shachaf> My "assembler" has very primitve fixups for local jumps but that's it.
11:40:56 <wob_jonas> ``` objdump -x /bin/true # x86_64 here too
11:40:56 <HackEso> \ /bin/true: file format elf64-x86-64 \ /bin/true \ architecture: i386:x86-64, flags 0x00000150: \ HAS_SYMS, DYNAMIC, D_PAGED \ start address 0x0000000000001670 \ \ Program Header: \ PHDR off 0x0000000000000040 vaddr 0x0000000000000040 paddr 0x0000000000000040 align 2**3 \ filesz 0x00000000000001f8 memsz 0x00000000000001f8 flags r-x \ INTERP off 0x0000000000000238 vaddr 0x0000000000000238 paddr 0x0000000000000238 align 2**0 \
11:42:07 <shachaf> Man, you need a hash table and GOT and probably a GNU hash table and all sorts of things.
11:43:03 <wob_jonas> shachaf: maybe they differ in /proc settings about address randomization or something?
11:43:04 <shachaf> Oh, running ld.so directly tells me what's wrong:
11:43:24 <shachaf> "error while loading shared libraries: [...]: ELF load command address/offset not properly aligned"
11:43:44 <shachaf> That's a very legitimate complaint, ld.so.
11:44:51 <shachaf> Oh, no, that's what it says on the *statically linked* file.
11:49:26 -!- Sgeo has quit (Read error: Connection reset by peer).
11:49:51 -!- Sgeo has joined.
11:52:48 <shachaf> Oh, what do you know, it's not properly aligned.
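(The constraint ld.so was complaining about, as a sketch: each PT_LOAD entry must have p_vaddr and p_offset congruent modulo p_align, which is normally the page size. `ph` is an Elf64_Phdr read as in the earlier header-walking sketch.)

    #include <elf.h>
    #include <stdio.h>

    static void check_load_alignment(const Elf64_Phdr *ph) {
        if (ph->p_type == PT_LOAD && ph->p_align > 1 &&
            ph->p_vaddr % ph->p_align != ph->p_offset % ph->p_align)
            fprintf(stderr, "PT_LOAD not properly aligned: vaddr=%#llx off=%#llx align=%#llx\n",
                    (unsigned long long)ph->p_vaddr, (unsigned long long)ph->p_offset,
                    (unsigned long long)ph->p_align);
    }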
11:52:49 <int-e> shachaf: do you have an LD_LIBRARY_PATH set? I get strace /bin/true 2>&1 | wc -l => 73 and LD_LIBRARY_PATH= strace /bin/true 2>&1 | wc -l => 25...
11:54:50 <shachaf> I have an LD_PRELOAD, courtesy of Ubuntu.
11:55:06 <shachaf> Because Ubuntu is ridiculous in many ways.
11:55:16 <shachaf> I've probably mentioned how bad this LD_PRELOAD is before.
11:55:18 <int-e> really, what does Ubuntu deem important enough to LD_PRELOAD?
11:55:45 <cpressey> I feel left out. I'm running Ubuntu and I don't have a LD_PRELOAD.
11:55:50 <shachaf> So GTK or GNOME decided to switch to drawing decorations in the client and requesting borderless windows from the WM at one point.
11:55:51 <wob_jonas> int-e: some graphics toolkit thing
11:56:27 <shachaf> This only works particularly well if you're running GNOME. And there's no configuration to disable it. So if you don't run GNOME, they set you up with an LD_PRELOAD that forces GTK to use the old behavior.
11:56:53 <int-e> Fancy. And awkward.
11:57:09 <shachaf> This is definitely the most reasonable way to do things, rather than, say, patching the source to check an environment variable for using the old behavior.
11:57:26 <shachaf> Or patching the source in any other way. That's not Ubuntu's business.
11:57:58 <shachaf> Anyway I'm stuck with this LD_PRELOAD which constantly makes things fail in annoying ways.
11:58:29 <shachaf> For example Nix programs run with a different library path so they can't find the GTK wrapper and they print an error message whenever I run them.
12:00:56 <cpressey> This must be an Ubuntu 18.04 thing, I'm still running 16.04. What happens if you override LD_PRELOAD?
12:01:24 <cpressey> Maybe I don't actually want to know
12:01:42 <int-e> Could this be specific to Unity (and hence primarily Ubuntu)?
12:02:26 <shachaf> I don't remember whether Ubuntu uses Unity or GNOME by default?
12:02:42 <shachaf> But I think this is a GTK-wide or GNOME-wide decision.
12:02:51 <shachaf> https://wiki.gnome.org/Initiatives/CSD
12:03:32 <shachaf> cpressey: If I override LD_PRELOAD then most things work slightly better except for GTK programs which work quite a bit worse.
12:04:15 <wob_jonas> what if you use wrappers for GTK programs that restore the LD_PRELOAD?
12:04:40 <Taneb> Ubuntu uses Gnome3 by default in recent versions
12:04:50 <shachaf> But who can know what programs are GTK programs?
12:05:00 <int-e> shachaf: Ah so it's a nasty surprise still in the making.
12:05:35 <int-e> firefox, thunderbird, emacs, inkscape, gucharmap... are my main gtk apps?
12:06:37 <int-e> (Emacs has several frontends but I'm pretty sure the gtk one is what I'm using. I expect it's still gtk2 and won't be affected for a while yet.)
12:06:47 <wob_jonas> shachaf: ask the package manager what programs it would uninstall if you decided to uninstall gtk
12:07:16 <shachaf> Also GTK is a mess in many other ways.
12:07:36 <shachaf> It does theming in a particular way, but if you run something called a settings-daemon then it starts doing theming in a completely different way.
12:07:39 <Taneb> They can't even decide what the G stands for
12:07:56 <shachaf> And half of your programs work well with a high-DPI screen one way, and half the other way.
12:08:06 <int-e> Oh, gimp of course. Forgetting about that one is embarrassing. :)
12:08:40 <shachaf> I tried running a settings-daemon not long ago and it was so terrible that I stopped.
12:08:52 <shachaf> Despite it being the only way to make something work.
12:09:05 <shachaf> The year of Linux on the desktop is now.
12:09:37 <shachaf> But don't worry. As soon as I write this compiler I'll write some good GUI programs with it.
12:10:41 <shachaf> OK, there's no definite compiler planned. But I did write some UI programs using plain X11+OpenGL.
12:11:48 <shachaf> They're surely way better than some kind of GTK nonsense that prints a bunch of dbind-warnings whenever you run it.
12:13:36 <shachaf> At least it's not kbuilding any sycocas.
12:14:59 -!- ais523 has joined.
12:16:04 <ais523> <AMD64 Architecture Programmer’s Manual Volume 3:
12:16:05 <ais523> General-Purpose and System Instructions> "PEXT Parallel Extract Bits \ Copies bits from the source operand, based on a mask, and packs them into the low-order bits of the destination. Clears all bits in the destination to the left of the most-significant bit copied."
12:16:42 <ais523> did they seriously add select from INTERCAL to the x86 instruction set?
12:16:59 <ais523> although this version is 32-bit or 64-bit, rather than 16-bit or 32-bit
12:17:33 <ais523> it's part of the BMI2 instruction set, which my processor apparently supports
12:17:51 * ais523 has an urge to feature-test this during C-INTERCAL's build process and use the asm instruction if supported
12:18:07 <wob_jonas> ais523: yes. some call it sheep and goats.
12:18:24 <wob_jonas> ais523: you can use the 32-bit one to emulate the 16-bit one though
12:18:58 <wob_jonas> ais523: you can probably use a gcc intrinsic and an MSVC intrinsic, with ifdefs, rather than an inline asm
12:19:37 -!- heroux has quit (Read error: Connection reset by peer).
12:19:53 <ais523> inline asm is more fun
12:19:57 -!- heroux has joined.
12:20:25 <shachaf> Microsoft doesn't support inline assembly on x64.
12:21:47 <ais523> that doesn't really matter, C-INTERCAL has a really robust autoconf/automake setup and this is the sort of random thing autoconf is designed for
12:22:27 <shachaf> Does autoconf even work on Windows?
12:22:54 <shachaf> autoconf is awful and I hate its ./configure scripts.
12:23:19 <shachaf> Most of what it does isn't useful and hasn't been useful for decades, and it has real and significant costs.
12:23:21 <ais523> it works about as well as sh and friends do
12:23:36 <ais523> fwiw, I agree with you about autoconf solving entirely the wrong problem
12:23:48 <ais523> but for C-INTERCAL in particular this felt like an upside rather than a downside
12:23:51 <shachaf> If they cared, autoconf people could at least make the configure scripts much faster, but I don't imagine they do, or maybe there just are no autoconf people.
12:23:57 <ais523> it is not the most serious of projects
12:24:15 <shachaf> Sure, for C-INTERCAL you can get an exception.
12:24:33 <shachaf> Though I feel like autoconf isn't even the enjoyable kind of esocomplexity.
12:24:40 <shachaf> It's just nonsense complexity that makes things bad.
12:25:14 <wob_jonas> ais523: https://docs.microsoft.com/en-us/cpp/intrinsics/x64-amd64-intrinsics-list?view=vs-2019 suggests that _pext_u64 is the intel-standard intrinsic, though I'll have to check that in the intel architecture manual
12:25:40 <wob_jonas> if that's right, then that will work the same on gcc and msvc, because gcc has headers implementing all that stuff based on gcc builtins
12:25:41 <ais523> tbh I'm not sure if C-INTERCAL even compiles on Windows
12:25:48 <ais523> I got it compiling on /DOS/ once but that's different
12:25:48 <cpressey> shachaf: https://github.com/GregorR/autoconf-lean
12:26:12 <cpressey> By a person who used to hang out here frequently once
12:26:50 -!- j-bot has quit (Ping timeout: 244 seconds).
12:27:02 <wob_jonas> yeah, the intel architecture reference confirms that _pext_u32 and _pext_u64 are the functions corresponding to the PEXT instruction
12:27:21 <int-e> cpressey: He still turns up once every blue moon.
12:27:23 <esowiki> [[Language list]] https://esolangs.org/w/index.php?diff=64796&oldid=64795 * Hanzlu * (+10)
12:27:34 <wob_jonas> it's probably still worth testing for this in the autoconf, but it should work
12:28:21 <int-e> cpressey: and of course umlbox is still actively used
12:28:31 <wob_jonas> the gcc headers even define these so that they emulate the same operation even if you compile to older instruction sets or non-x86 cpu
12:30:00 <ais523> ugh, is it correct to write this instruction as asm or as machine code?
12:30:16 <ais523> I guess it has to be asm so that gcc can participate in register allocation
12:31:04 <int-e> That's certainly the preferable way, if you want to shun the compiler intrinsic.
12:31:25 <shachaf> It's possible to prefer it, but not mandatory.
12:32:19 <ais523> yes, but this is INTERCAL, so I have to give at least passing thought to the idea that writing it as raw bytes would mean you didn't have to worry about what syntax the assembler used
12:32:40 <ais523> wob_jonas: what header files are those even in?
12:32:56 <ais523> tbh checking for inline asm support in autoconf is probably easier than checking for a specific header file
12:33:30 <wob_jonas> ais523: look up the header file name and the type of the function at https://docs.microsoft.com/en-us/cpp/intrinsics/x64-amd64-intrinsics-list?view=vs-2019
12:34:05 <ais523> hmm, neat, seems like both gcc and clang support it
12:34:22 <ais523> that'd clearly be the better way to do things, which gives a reason to avoid it
12:35:26 <wob_jonas> ais523: yes, this happens to most of the new x86 instructions; it's only the old instructions BSF and BSR that fall through the cracks and have like three different sets of compiler intrinsics that you have to ifdef between, because msvc doesn't support the gcc builtins nor vice versa
12:36:04 <wob_jonas> I contributed the parts of http://software.schmorp.de/pkg/libecb.html where it can use the MSVC wrappers for BSF and BSR, which is why I know
12:36:15 <ais523> is fused multiply-add broken the same way? it's the instruction that's different between Intel and AMD due to a lack of coordination
12:36:57 <wob_jonas> you should note that even though msvc and gcc both support this, the semantics differ: on msvc, the intrinsic will just emit that instruction even if you're compiling for an older cpu,
12:37:46 <wob_jonas> for gcc it emits something that gives the same computation result as that instruction would perform, which for such new instructions won't actually call that instruction, unless you're explicitly compiling with a high -march
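(A sketch of the intrinsic-with-fallback approach being discussed; _pext_u32 is the Intel-documented name available from <immintrin.h> on GCC/Clang (with -mbmi2) and MSVC alike. The plain-C fallback loop is my own stand-in for targets without BMI2, not C-INTERCAL's actual code.)

    #include <stdint.h>
    #if defined(__BMI2__)
    #include <immintrin.h>
    #endif

    /* INTERCAL select: pack the bits of value picked out by mask into the low end */
    static uint32_t intercal_select(uint32_t value, uint32_t mask) {
    #if defined(__BMI2__)
        return _pext_u32(value, mask);            /* one PEXT instruction */
    #else
        uint32_t out = 0, bit = 1;
        for (uint32_t m = mask; m; m &= m - 1) {  /* walk the set bits of mask, low to high */
            if (value & m & -m) out |= bit;
            bit <<= 1;
        }
        return out;
    #endif
    }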
12:38:09 <wob_jonas> I don't know, I don't follow how the fused multiply-add and all that neural network nonsense worked, sorry
12:39:55 <ais523> wob_jonas: it's a silly history
12:40:08 <ais523> AMD and Intel came out with incompatible implementations of the same instruction
12:40:21 <esowiki> [[Special:Log/newusers]] create * RetroBug * New user account
12:40:27 <ais523> then both dropped their own version of it and implemented the other's, so they're still incompatible but in the other direction
12:42:36 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=64797&oldid=64793 * RetroBug * (+68)
12:43:05 <shachaf> One fun fact about ELF is that the ELF 64 standard says hash table entries are 64 bits, but most implementations use 32 bits.
12:43:22 <shachaf> I think that means the standard is wrong rather than the implementations.
12:47:38 <ais523> hmm, I suspect this inline asm version may actually be substantially faster than what was there before; performance improvements are great!
12:47:42 <ais523> now, I wonder how best to do mingles
12:48:09 <ais523> AVX and friends have mingle instructions, but sadly they only mingle at the byte level
12:51:11 <wob_jonas> ais523: that's what the opposite instruction PDEP is for. if you have PEXT, you also have PDEP.
12:51:40 <ais523> shachaf: alternates bits in two operands to form a combined operand of twice the width
12:51:48 <ais523> wob_jonas: right, two PDEPs and an OR would do it
12:53:19 <wob_jonas> ais523: but intercal code often uses mingle followed by an intercal bitwise followed by selecting the odd or even bits, which you can optimize to just a bitwise op
12:54:03 <ais523> yes, C-INTERCAL does that optimisation already
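(And the corresponding mingle sketch, two PDEPs and an OR as suggested above; assumes BMI2. Which operand ends up on the odd bits is whatever the INTERCAL convention requires, so treat the choice of masks here as illustrative.)

    #include <stdint.h>
    #include <immintrin.h>   /* needs -mbmi2 */

    /* INTERCAL mingle: interleave the 16 bits of a and b into a 32-bit result */
    static uint32_t intercal_mingle(uint16_t a, uint16_t b) {
        return _pdep_u32(a, 0xAAAAAAAAu)    /* a's bits go to the odd positions */
             | _pdep_u32(b, 0x55555555u);   /* b's bits go to the even positions */
    }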
12:54:47 <wob_jonas> I mentioned at some point that I think intercal code could use that redundant representation of integers that's base 2 but digits go from -2 to 1, because you can do arithmetic on that representation with the intercal ops faster
12:55:33 <ais523> yes but it's way harder to store in memory
12:56:25 <shachaf> I feel like there are very limited uses for inline assembly nowadays.
12:56:33 <shachaf> Almost everything is covered by either top-level assembly or intrinsics.
12:56:53 <shachaf> So Microsoft's decision is perhaps reasonable.
12:57:03 <shachaf> What are uses for inline assembly?
12:58:13 <ais523> shachaf: out-optimising the compiler is one thing
12:58:25 <izabera> accessing some specific instructions
12:58:49 <wob_jonas> ais523: no it's not. you just store it as two integers, and they represent their difference
12:58:59 <izabera> like rdrand or rdtsc or...
12:59:00 <ais523> this morning, I was curious about the following problem: suppose you have a function that generates a sequence of ints and can't be parallelised
12:59:14 <wob_jonas> I'll have to clear this up at some point, but now I don't know how they work
12:59:15 <ais523> what's the fastest way to store the generated ints into memory, assuming that there are too many to fit into the L2 cache?
12:59:44 <shachaf> But at what point do you need to out-optimize the compiler within a function?
13:00:20 <shachaf> I think in most such cases you end up wanting to write the whole function in assembly.
13:00:20 <ais523> gcc's and clang's approaches were utterly different, but very comparable in speed; I tried a few other things on my own, and eventually found one that was slightly but consistently faster
13:00:26 <shachaf> Specific instructions sounds like what intrinsics are for.
13:00:51 <lambdabot> Local time for shachaf is Tue Jul 30 06:00:49 2019
13:01:05 <ais523> shachaf: well, in my case, the loop was still written in C
13:03:16 <ais523> funnily enough, I decided to use a repeated rotate-left as a standin for the "function that generates a sequence of ints and can't be parallelised" (yes, I know you can parallelise that in practice)
13:03:46 <ais523> and the compiler didn't recognise it, so I ended up writing the "add %0, %0\n\tadc $0, %0" manually
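(The hand-written rotate in context, as GCC/Clang extended asm for x86-64: ADD x,x shifts left by one and leaves the old top bit in CF, and ADC x,0 adds that bit back in at the bottom, i.e. a rotate left by one.)

    #include <stdint.h>

    static inline uint64_t rotl1(uint64_t x) {
        __asm__("add %0, %0\n\t"
                "adc $0, %0"
                : "+r"(x)
                :
                : "cc");
        return x;
    }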
13:08:05 -!- sebbu has joined.
13:08:45 <ais523> actually my experience is that even modern compilers are fairly bad at micro-optimisation, they're just good at knowing about more long-range optimisations that humans don't often think of
13:10:00 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64798&oldid=64789 * Hanzlu * (+204)
13:12:00 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64799&oldid=64798 * Hanzlu * (-2)
13:15:55 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64800&oldid=64794 * A * (+166) 2019 esolang
13:17:26 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64801&oldid=64800 * A * (+18) No
13:20:47 <ais523> OK, C-INTERCAL repo updated with the use of inline asm for PEXT
13:22:27 <ais523> our existing mingle is fairly optimised as it is
13:23:26 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64802&oldid=64801 * A * (+159)
13:23:29 <ais523> that said, it's still a /lot/ of instructions
13:24:28 <ais523> hmm, I wonder how you ask gcc to pick an arbitrary temporary for you
13:24:37 <wob_jonas> ais523: you know that Warren's "Hacker's Delight" talks about the mingling (shuffling) and selecting, right? I don't recall what it says, but it definitely talks about them.
13:24:42 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64803&oldid=64802 * A * (-22)
13:24:43 <ais523> maybe just say "register int temp;" and assign to it without reading it
13:25:16 <wob_jonas> ais523: I don't think you even need "register" if it's arbitrary
13:25:29 <ais523> oh, duh, you just do it one instruction at a time
13:25:55 <ais523> wob_jonas: well, it has to actually /be/ a register, although gcc's =r hint is sufficient to teach it about that
13:26:14 -!- j-bot has joined.
13:26:43 <wob_jonas> as in, fourth argument or something
13:26:45 <ais523> clobbers have to be fixed in the source code, though
13:26:59 <esowiki> [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64804&oldid=64803 * A * (+133)
13:27:03 <ais523> I think the correct thing to do is to just make the temporary visible to gcc explicitly so that it can do SSA and friends on it
13:27:06 <ais523> and spills, and the like
13:28:36 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64805&oldid=64804 * A * (-3)
13:30:31 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64806&oldid=64805 * A * (+19)
13:30:31 <oklopol> ais: about compilers being bad at micro-optimization, https://m.youtube.com/watch?v=bSkpMdDe4g4
13:31:49 <oklopol> I found some of those impressive, sums compressed to formulas, multiplication turned differently into combinations of bit shifts etc.
13:32:07 <oklopol> (probably very basic stuff, I'm no expert)
13:32:08 -!- wob_jonas has quit (Ping timeout: 245 seconds).
13:33:54 -!- wob_jonas has joined.
13:34:38 <esowiki> [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64807&oldid=64806 * A * (+22)
13:35:47 <cpressey> ais523: you should mark it "volatile volatile"
13:37:20 <ais523> oklopol: AMD's optimisation guide has a list of constants for which it's worth using alternative code to multiply by them
13:37:36 <ais523> the smallest nonnegative integer for which IMUL is the fastest way to multiply by that integer is 22
13:37:44 <ais523> for every smaller integer, there's some trick
13:37:58 <ais523> (disappointingly, they didn't even bother to list the tricks for multiplying by 0 or 1)
13:38:27 <oklopol> ais: yes that sort of stuff, optimizing mul by constant to shifts, and also vice versa if you try to be clever :P
13:39:00 <oklopol> And differently based on what you're compiling for
13:41:05 <ais523> optimising to shifts is boring, the /real/ trick on x86 is to use the AGU to do multiplications by unexpected numbers
13:41:47 <ais523> e.g. for multiply by 9, AMD suggests "lea reg1, [reg1 + reg1 * 8]"
13:41:48 <oklopol> This is also shown on the vid iiuc
13:42:04 <oklopol> Yes that's automatically done by optimizers
13:42:05 <ais523> btw, LEA is still a total hack :-)
13:42:20 <ais523> I'd expect any compiler developer who cares about optimization to have read this document already
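(For what it's worth, compilers already apply the AGU trick on their own; a sketch, with the codegen noted in a comment as I'd expect it from GCC or Clang at -O2 on x86-64.)

    #include <stdint.h>

    uint64_t times9(uint64_t x) {
        /* typically compiles to a single: lea rax, [rdi + rdi*8] */
        return x * 9;
    }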
13:42:22 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64808&oldid=64807 * A * (+228)
13:43:56 <esowiki> [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64809&oldid=64808 * A * (-34) An infinite loop in a language that only provides finite loops!
13:44:52 -!- wob_jonas has quit (Ping timeout: 272 seconds).
13:46:26 <esowiki> [[What Mains Numbers?]] M https://esolangs.org/w/index.php?diff=64810&oldid=64809 * A * (+43) /* Infinite loop */
13:50:12 <esowiki> [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64811&oldid=64810 * A * (+61)
13:51:40 <ais523> OK, mingles are now also hardware-accelerated
13:54:08 <esowiki> [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64812&oldid=64811 * A * (-27) /* What Mains Numbers? */
13:55:47 <esowiki> [[What Mains Numbers?]] https://esolangs.org/w/index.php?diff=64813&oldid=64812 * Ais523 * (+124) Mixed undo revisions 64809, 64810 by [[Special:Contributions/A|A]] ([[User talk:A|talk]]): not an infinite loop, it just allocates so much memory that it'll probably thrash nearly-indefinitely
13:56:05 <ais523> A should stop jumping to assumptions :-(
13:56:39 <ais523> btw, I had a great idea about how pointers should work
13:57:04 <ais523> instead of pointing to the start or end of an object, they should point to the middle (this means adding an extra bit so you can point into the middle of a byte)
13:57:33 <ais523> this assumes that all your allocations are power-of-2-sized and aligned, otherwise there's no real gain
13:57:48 -!- wob_jonas has joined.
13:58:06 <wob_jonas> exactly in the middle of objects? hmm
13:58:17 <ais523> but if you have that, then the middle-pointer uniquely specifies both the memory you're accessing and the width of it, which should make things like hardware bounds checking efficiently possible
13:58:55 <Taneb> How does it store the width?
13:59:07 <ais523> on x86_64 you could make up for the extra bit at the end by dropping bit 62, it's never going to get used anyway
13:59:13 <ais523> Taneb: count the number of trailing zeroes
13:59:39 <ais523> objects are power-of-2-sized and aligned, thus the middle is aligned with respect to half the object's size but misaligned with respect to the object's full size
14:00:11 <ais523> thus, you can use the alignment to determine the size, without ever having a pointer that's randomly more aligned than it should be
14:00:43 <wob_jonas> that won't give exact bounds checks though, only bounds checks rounded up to a power of two or something close
14:00:57 <ais523> well, you only allocate objects in power-of-2 sizes
14:01:05 <ais523> (there are good reasons for a malloc to do that anyway)
14:01:14 <ais523> the main issue is structs, I think
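(A sketch of how the size falls out of the trailing zeroes in ais523's scheme. The pointer is counted in half-byte units so that a 1-byte object's middle is representable; the encoding details here are my own choices, and __builtin_ctzll is a GCC/Clang builtin.)

    #include <stdint.h>

    typedef uint64_t midptr;   /* address in half-byte units, pointing at the object's middle */

    /* object is 2^k bytes at a 2^k-aligned base */
    static midptr make_midptr(uintptr_t base, unsigned k) {
        return ((midptr)base << 1) + ((midptr)1 << k);   /* base*2 + half the size, in half-bytes */
    }

    /* base*2 has at least k+1 trailing zero bits, so the sum has exactly k of them */
    static unsigned midptr_log2_size(midptr p) { return (unsigned)__builtin_ctzll(p); }

    static uintptr_t midptr_base(midptr p) {
        unsigned k = midptr_log2_size(p);
        return (uintptr_t)((p - ((midptr)1 << k)) >> 1);
    }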
14:01:44 <wob_jonas> you can also do a fibonacci version of this alignment scheme, just to screw with people
14:01:57 <int-e> Or you can add 3*2^k into the mix for fun.
14:02:06 <ais523> the algorithm of "just allocate in the first available aligned address of the appropriate size" is great, when you use power-of-two sizes only it actually works
14:02:12 <int-e> (That seems easier than fibonacci.)
14:02:24 <int-e> (Also Fibonacci seems awful for alignment.)
14:02:25 <wob_jonas> allocate only objects of fibonacci size, at addresses whose address in zeckendorf end with as many zeroes
14:02:34 <ais523> yes, the fibonacci version is definitely in the screwing-with-people realm
14:02:47 <wob_jonas> int-e: only on current cpus, which use 64-byte 64-aligned cache lines
14:03:10 <ais523> I have a suspicion that 64-byte will be the correct size for a cache line for the foreseeable future
14:03:17 <int-e> When will we move to 128? Also, RAM rows enter the picture as well at some point.
14:03:59 <ais523> just like my tests indicate that 16 bytes is the correct size for a bulk write to memory (if you're getting the data as individual ints rather than a bulk read)
14:04:33 <ais523> int-e: cache lines are weird, ideally you'd want them to be /smaller/, the only reason to have them that large is to reduce the amount of bookkeeping you have to do
14:05:02 <ais523> a larger cache line would mean that you had so many of the things that you could afford to often waste data space in the L1 cache, but were very tight on bookkeeping cache
14:05:10 <ais523> which seems implausible with modern processor designs
14:05:39 <ais523> I guess maybe L2 would benefit from longer cache lines?
14:05:40 <wob_jonas> int-e: in a hypothetical cpu that has 55 and 89 byte cache lines, aligned to fibonacci round addresses
14:05:46 <ais523> but there are obvious reasons to want them the same size as L1
14:05:54 -!- oklopol has quit (Ping timeout: 258 seconds).
14:05:56 <int-e> wob_jonas: I'm not going there.
14:07:36 <ais523> I wonder what the performance would be of a malloc that, for large objects, just maps a ridiculous amount of memory as MAP_NORESERVE and relies on the kernel to do the actual allocations when page faults happen
14:08:09 <ais523> (the page faults were going to happen anyway, so there seems to be no particular reason to do anything at other times)
14:08:44 <ais523> the huge advantage of this is that realloc becomes a nop, which helps make your write loops tighter
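(A sketch of the reserve-the-address-space-up-front idea, Linux-specific and subject to the kernel's overcommit policy; the 16 GiB figure is arbitrary. Nothing is paid for until pages are actually touched, and growing the object is just a matter of remembering a larger length.)

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void) {
        size_t reserve = (size_t)16 << 30;   /* 16 GiB of address space, not of memory */
        char *p = mmap(NULL, reserve, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }
        p[0] = 1;                     /* physical pages only appear on first touch */
        p[(size_t)8 << 30] = 2;
        printf("reserved %zu bytes at %p\n", reserve, (void *)p);
        return 0;
    }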
14:09:06 <wob_jonas> ais523: yeah, but that doesn't work too well when you allocate a lot of small objects, which is a common case
14:09:14 <wob_jonas> also you don't have infinite address space
14:09:18 <ais523> you need a different algorithm for small objects, yes
14:09:25 <ais523> so that they don't need a separate page
14:09:32 <ais523> but you do pretty much have infinite address space
14:09:58 <ais523> you can allocate 4 GiB for every object and still have 32 bits left
14:10:08 <wob_jonas> the kernel has to do bookkeeping for what you allocate
14:10:22 <wob_jonas> no, you don't have 64 bits of virtual address space
14:10:35 <ais523> it's, what, 48 bits on modern processors?
14:10:42 <wob_jonas> that's just there so that the architecture can expand the address space later without breaking binary compatibility
14:11:10 <ais523> because half the virtual address space is reserved for kernel-internal use
14:11:22 <wob_jonas> (and that too only if people don't start using high bits for tag bits when they have perfectly usable low bits instead, like they did in the 32-bit era and ended up with a prolog interpreter that couldn't use more than 256 megabytes of memory)
14:11:46 <ais523> even so, that's still 32767 self-reallocing objects, there are plenty of programs that are unlikely to use anywhere near that many
14:12:10 <ais523> wob_jonas: they can't, x86_64 actually intentionally crashes if it sees a high bit used as a tag bit
14:12:13 <wob_jonas> I don't know how many bits we have now, they keep changing that every decade or so, I'm not following
14:12:39 <wob_jonas> if you explicitly mask it off before using it as an address, it will work
14:12:58 <ais523> right, because the processor can't see how the value was derived
14:13:11 <wob_jonas> but low bits is still easier, because if you know all the low bits, you can usually remove them by just using the right offset
14:13:35 <wob_jonas> most people do get this right though, so it's not much of a worry
14:13:54 <wob_jonas> that one prolog interpreter was more just an unfortunate exception
14:14:03 <ais523> anyway, one thing that's really annoying is that malloc() is not async-signal-safe
14:14:20 <ais523> the first-power-of-2 technique can be implemented lock-free, I think
14:14:37 <ais523> in which case it probably should be, so that people can allocate memory in their signal handlers without deadlocks
14:15:14 <wob_jonas> yeah, you're right, 48 bits of virtual address space now, I think
14:17:12 <wob_jonas> ais523: really? do you mean even without a small performance penalty for the common case of sane programs that don't try to allocate from a signal handler?
14:17:50 <wob_jonas> if you really want to allocate from a signal handler, then use a custom more expensive allocator for those parts of the code that may run from a signal handler
14:18:06 <ais523> wob_jonas: well you need to use a lock or atomic /somewhere/
14:18:29 <wob_jonas> but I think it's usually better to just not do anything fancy from a signal handler
14:18:29 <ais523> I think there's debate about which is faster in the common, non-contended case, but I'm guessing they're much the same
14:18:58 <ais523> and when there's no contention the algorithm runs quickly (unless there's /so much/ contention that the processor starts predicting the branch as taken, which is likely to be the least of your issues)
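(A toy illustration of why lock-freedom is what makes allocation usable from a signal handler: a fixed arena handed out with one atomic fetch-add, so there's no lock for the handler to deadlock on. A real lock-free malloc is of course far more involved.)

    #include <stdatomic.h>
    #include <stddef.h>

    static _Alignas(16) unsigned char arena[1 << 20];   /* 1 MiB static arena */
    static _Atomic size_t arena_used = 0;

    /* async-signal-safe: no locks, just one atomic read-modify-write */
    static void *sig_safe_alloc(size_t n) {
        n = (n + 15) & ~(size_t)15;                      /* keep 16-byte alignment */
        size_t old = atomic_fetch_add(&arena_used, n);
        if (old + n > sizeof arena) return NULL;         /* arena exhausted (the add isn't undone) */
        return arena + old;
    }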
14:20:41 -!- ARCUN has joined.
14:21:35 <ARCUN> Anyone know any good FPGAs? I need one for my esoteric computer.
14:22:50 <ais523> in my experience, FPGA toolchains are really terrible
14:23:15 <ais523> as for the FPGAs themselves, for the majority of tasks, either most FPGAs will be good enough or affordable FPGAs won't be good enough
14:23:17 -!- ARCUN has quit (Remote host closed the connection).
14:23:27 <ais523> so the main difficulty is finding a way to wire them up to your computer
14:24:18 <wob_jonas> why? don't those FPGAs have IO devices built in?
14:25:24 -!- ARCUN has joined.
14:25:58 <cpressey> They're field programmable, you see.
14:26:24 <cpressey> If you happen to be in a forest, tough luck.
14:26:30 <ARCUN> I was thinking of using an Altera Cyclone II mini, but I heard that the Spartan series is good too
14:28:47 <ARCUN> One of the main problems is, how would I get it to display items on the screen? VHDL really doesn't make this any easier, as it's not the most concise of languages
14:29:52 -!- ARCUN has quit (Remote host closed the connection).
14:31:52 <cpressey> https://github.com/stacksmith/fpgasm
14:32:26 <ais523> hmm, so in a quick test, Linux was quite happy to allocate me 16 GiB of address space in one large mapping
14:32:58 <ais523> even though I don't have that much memory in physical or swap space or both combined
14:33:12 <wob_jonas> well sure, many computers these days have 16 GB physical memory
14:33:28 <ais523> and I could read/write random addresses in it without any obvious performance issues
14:34:36 <wob_jonas> but won't the kernel still need to keep about 1/1000 the size of that virtual memory for administration?
14:34:42 <ais523> this leads me to suspect that the most efficient way to deal with memory, if you don't care about getting segfaults for wild accesses, is to only ask the kernel for memory once in the lifetime of the program, and use writes to memory to allocate it and madvise to free it
14:34:58 <ais523> wob_jonas: page caches have multiple levels nowadays
14:35:02 <wob_jonas> unless you use large pages that is, but large pages would defeat the problem
14:35:15 <wob_jonas> ais523: sure, but ... I don't know how that works in the kernel
14:35:30 <ais523> also I don't see how the same problem doesn't happen even if you allocate a bit at a time and use brk and mmap and whatever to request more as you need it
14:35:41 <ais523> err, "a small amount at a time", not a literal bit :-)
14:36:19 <wob_jonas> sorry, I was trying to argue against the method you mentioned above, of allocating 4G for every large object
14:36:25 <ais523> does MADV_REMOVE work with anonymous mappings, I wonder?
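(To the best of my knowledge, per madvise(2), MADV_REMOVE only applies to tmpfs/shmem-backed mappings; for private anonymous memory the matching operation is MADV_DONTNEED — or MADV_FREE on newer kernels — after which the range reads back as zeroes. A sketch; addr must be page-aligned.)

    #include <stddef.h>
    #include <sys/mman.h>

    /* drop the pages now; they're zero-filled again on the next touch */
    static int release_pages(void *addr, size_t len) {
        return madvise(addr, len, MADV_DONTNEED);
    }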
14:38:36 <ais523> hmm, I wonder if any memset implementations use madvise to zero memory? I'm guessing not, it'd be insane
14:38:45 <ais523> but memset is the sort of function where insane optimisations can make sense
14:39:10 <ais523> (the idea would be to swap out the page backing the memory you're trying to zero for a freshly zeroed page)
14:39:30 <ais523> I don't know whether Linux has a background memory zeroing daemon (or equivalent); I know Windows does
14:39:37 <wob_jonas> I don't think it would help much, in the long run, as long as you're using memset for memory you want to use later, because the kernel has to zero the page eventually
14:40:06 <ais523> Windows has a supply of pre-zeroed physical memory pages that it hands out to applications, and zeroes pages in the background after they're unmapped
14:40:37 <wob_jonas> yeah. I think linux has something like that too
14:42:18 <ais523> it doesn't run constantly, only when the number of zeroed pages is low
14:42:27 <ais523> if it gets very low the kernel foregrounds the page-zeroing task so that it never runs out
14:44:37 -!- ais523 has quit (Remote host closed the connection).
14:44:50 -!- ais523 has joined.
14:49:27 <cpressey> Should I learn LLVM assembly or should I not bother
14:50:49 -!- oklopol has joined.
14:51:16 -!- wob_jonas has quit (Ping timeout: 246 seconds).
14:51:36 <Taneb> cpressey: that's up to you
14:53:00 <ais523> I'd say, only if you want to use it for something
14:53:14 <ais523> or if you're interested in SSA-based languages in general
14:53:33 <ais523> it's really a multiple-level language, though, it can express a lot of different levels of abstraction and is designed to compile into lower abstraction levels of itself
14:53:41 <ais523> (this is a common property for compiler intermediate representations)
14:53:50 <ais523> so really, "learning LLVM" is about learning a specific subset of it
14:55:08 <ais523> whatever you need for whatever it is you're doing
14:55:49 <cpressey> I have two compiler projects that became dead ends because I tried to generate C and it just got frustrating and boring and I abandoned them.
14:56:40 <ais523> I think generating C is generally easier than generating LLVM, also less platform-specific
14:56:49 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64814&oldid=64799 * Hanzlu * (+1117)
14:56:55 <ais523> (LLVM is slightly platform-specific, enough so that you can't really generate "portable LLVM")
14:57:15 <ais523> perhaps WebAssembly would be an interesting target to use instead, that's fairly regular as ASMs go
14:57:40 <cpressey> I'll just leave them as dead ends
14:57:55 -!- cpressey has quit (Quit: WeeChat 1.4).
15:02:17 -!- ais523 has quit (Quit: quit).
15:16:06 -!- doesthiswork has joined.
15:20:30 -!- wob_jonas has joined.
15:21:23 <wob_jonas> I was in London for the weekend. It seems that the stores sell milk in both one liter size and a size slightly larger than one liter, the latter is apparently somewhat round in some non-metric measurement unit.
15:22:21 <wob_jonas> Also they sell half liter and two liter bottles. I still find that strange. Half liter milk bags used to exist here, but only a very long time ago, and I've only ever seen ones larger than one liter abroad.
15:28:15 <Taneb> I sometimes buy the half-litre bottles if I'm thirsty when I'm out and about
15:38:26 -!- lldd_ has joined.
15:42:15 <kmc> drinking milk as a beverage is weird to me
15:43:28 <Taneb> It's weird to a lot of people
15:43:44 <wob_jonas> kmc: is that because you live in a place where you can't easily buy fresh milk, only
15:44:20 <wob_jonas> UHT milk? because fresh milk tastes much better, but I know it's not available everywhere
15:44:41 <Taneb> But like, it's cheaper and healthier (here at least) than soft drinks
15:46:17 <Taneb> ...now I'm thirsty
15:48:10 -!- FreeFull has quit.
15:51:10 -!- wob_jonas has quit (Remote host closed the connection).
16:12:38 <kmc> UHT milk isn't common in the USA
16:12:53 <kmc> we mostly have regular pasteurized milk
16:12:57 <kmc> which needs to be refrigerated
16:13:22 <kmc> I bought some lemonade the other day, didn't notice it was unpasteurized... within less than a week the bottle had puffed up to almost a round cylinder
16:13:39 <kmc> I started unscrewing it in the sink and the cap came off with a bang
16:17:42 <kmc> probably the "slightly larger than one liter" was 2 imperial pints?
16:18:00 <kmc> it's great how the UK's non-metric unit isn't even the same as the US's non-metric unit of the same name
16:18:18 <kmc> 2 imperial pints is a bit more than 1L but 2 US pints is a bit less than 1L
16:18:32 -!- xkapastel has joined.
16:20:11 <Taneb> Someone once taught me a rhyme, "A litre of water is a pint and three quarter"
16:20:44 <Taneb> I didn't realise the US pint was different
16:21:08 <kmc> that rhyme doesn't even rhyme very well
16:21:22 <kmc> a litre of wuarter
16:21:26 <Taneb> It rhymes almost perfectly to me
16:21:43 <Taneb> You must talk weirdly
16:21:52 <Taneb> (or, like, have a rhotic accent)
16:25:55 -!- Sgeo_ has joined.
16:29:13 -!- Sgeo has quit (Ping timeout: 245 seconds).
17:06:36 <adu> I live in the US and I have no idea what a pint is
17:13:01 -!- b_jonas has joined.
17:33:48 <b_jonas> I didn't much pay attention right there, and I don't have the bottles or photos of them anymore
17:37:20 <b_jonas> still no IOCCC source codes
18:16:12 <b_jonas> so the Giant says that the end of the sixth OotS book is in sight. and there will only be seven books. we must be two thirds of the way into the story by now.
18:16:38 <b_jonas> I presume the last book will be the thickest, because that's how these series usually go, but still.
18:18:58 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64815&oldid=64814 * Hanzlu * (-178)
18:21:17 <b_jonas> can you imagine living in a time when everyone knows OotS as an epic that is already complete, and we tell children about how we had to wait ten days (uphill both ways) for the next strip to appear, over and over again for each strip?
18:25:00 <b_jonas> although I guess we can already tell them about when Harry Potter wasn't yet complete
18:27:51 -!- ARCUN has joined.
18:28:47 <ARCUN> Ubuntu came out with the 19.04 version
18:28:58 <ARCUN> I almost installed 18.04
18:29:10 -!- ARCUN has left.
18:30:09 <b_jonas> and #esoteric is logged way back so we can even prove it
18:58:49 -!- lldd_ has quit (Quit: Leaving).
19:09:28 <arseniiv> <b_jonas> although I guess we can already tell them about when Harry Potter wasn't yet complete => was it also published strip by strip?
19:15:13 <b_jonas> arseniiv: no, but we had to wait for the last three books
19:31:45 <arseniiv> it would be quite interesting if Harry was originally a comic series
19:34:15 <b_jonas> dunno. that would make the books more expensive, I think, so it would get to fewer people
19:35:10 <b_jonas> the way they are, with books, I can have the complete story in seven books. in comics, I could only have slices.
19:36:25 <arseniiv> hm, there are some prose/comic hybrids out there, maybe it’s a good format
19:37:02 <b_jonas> I have Matilda by Roald Dahl on my bookshelf, but that one is short
19:37:51 <arseniiv> oh, I didn’t know that’s illustrated originally (I only have seen a film)
19:38:10 <b_jonas> I also have some of the Kästner books
19:38:32 <arseniiv> also, is it translated, I mean Matilda?
19:38:52 <b_jonas> there is a translation, and I've read it, but in this case, I have the original English version of Matilda on my shelf
19:39:01 <b_jonas> the Kästner books I only have in translation
19:39:32 <b_jonas> Matilda is one of the books I've met when I was very young, but only got the original more recently
19:41:07 <arseniiv> BTW I don’t like very much how it ends, “and she didn’t need to use her telekinesis almost ever”, is it a tad boring
19:41:27 <b_jonas> no no, it ends by Matilda _losing_ her telekinesis
19:41:36 <b_jonas> there's some speculation too on why
19:41:40 <b_jonas> but that's not even the important point
19:42:13 <b_jonas> the more important is that it ends by Matilda living happily ever after with her teacher Ms Honey in the house that she inherited, instead of with the parents who don't care much about her
19:42:46 <arseniiv> I understand that what she didn't have was a loving family, yeah, I agree it's greater, but still
19:44:42 <arseniiv> it’s like there can only be one thing more important that all the others, and it doesn’t ring too true, even when I was a kid and saw the movie version the first time
19:45:51 <arseniiv> and I can also say if Matilda is okay with no superpowers, then so am I :D
19:46:18 <b_jonas> why wouldn't she be okay? she didn't ask for them anyway, and she was never dependent on them
19:50:41 <quintopia> @tell ais523 this new smb3 tas is even super cooler than last time thx
19:57:24 -!- xkapastel has quit (Quit: Connection closed for inactivity).
20:03:13 -!- Lord_of_Life_ has joined.
20:03:36 -!- Lord_of_Life has quit (Ping timeout: 272 seconds).
20:05:56 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
21:51:46 -!- xkapastel has joined.
21:57:39 <HackEso> An union is the opposite of an ion.
21:57:43 <HackEso> 1288) <ais523> (btw, "q = 1-p" should be the standard definition of q, IMO)
22:51:47 -!- b_jonas has quit (Quit: leaving).
23:09:38 -!- MDude has quit (Ping timeout: 245 seconds).
23:24:24 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64816&oldid=64815 * Hanzlu * (+439)
23:30:52 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64817&oldid=64816 * Hanzlu * (+479)
23:31:01 -!- MDude has joined.
23:33:56 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64818&oldid=64817 * Hanzlu * (+32)
23:53:55 <esowiki> [[ACL]] https://esolangs.org/w/index.php?diff=64819&oldid=64818 * Hanzlu * (+118)