00:16:43 -!- ais523 has quit (Remote host closed the connection).
00:23:50 <b_jonas> "<ais523> does the `volatile` keyword do anything useful nowadays?" => probably, but you rarely want it, and you definitely don't want volatile for synchronization between threads or processes (or cpu threads at the lower level), you want C11 atomics or C++ atomics and all the higher level stuff for that, and it's not quite clear to me how you're supposed to do communication with a signal handler and
00:23:56 <b_jonas> whether volatiles are still relevant for that.
00:25:23 <b_jonas> yes, atomics are defined more explicitly, and it's important that atomics can do two things for one goal: they can forbid the compiler from reordering memory access, and they can forbid the CPU from reordering memory access (on modern cpus that do that)
00:26:39 <b_jonas> but what volatile is supposed to mean I have no clear idea
00:28:09 <kmc> I think at minimum it ensures the number of reads/writes at the assembly level is the same as at the source level
00:28:28 <kmc> disabling optimizations such as hoisting a load out of a loop
00:28:30 <kmc> which is important if reads/writes have side effects
00:29:15 <kmc> as ais523 pointed out, this means little to nothing on a modern out of order, cached, possibly SMP system
00:29:22 <kmc> but it's still very meaningful for microcontrollers
00:29:53 <b_jonas> "in which case, you use volatile sig_atomic_t to specify that the flag should be written in a single machine instruction" => perhaps, but it's not clear if this actually still works in modern compilers. I mean, it made sense in old barely-optimizing compilers to just have a type synonym for a type that's as wide as the typical write instructions, so you don't try to use a 32-bit int on a cpu where all
00:29:59 <b_jonas> 32-bit accesses will be implemented as two 16-bit accesses. but these days, if you want to guarantee that a value is written as a whole, that's what https://en.cppreference.com/w/c/atomic/ATOMIC_LOCK_FREE_consts and https://en.cppreference.com/w/cpp/atomic/atomic_is_lock_free are for.
00:30:10 <kmc> or memory mapped IO even on full modern systems (which would be in a special region designated as uncached)
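A minimal sketch of the memory-mapped-I/O use of volatile kmc is describing; the register address and status bit are made up for the example, not a real device:

    #include <stdint.h>

    /* hypothetical device status register at a fixed address */
    #define STATUS_REG ((volatile uint32_t *)0x40021000u)

    void wait_until_ready(void)
    {
        /* volatile forces one load per iteration, matching the source-level
         * reads; without it the compiler could hoist the load out of the
         * loop and spin forever on a stale value. */
        while ((*STATUS_REG & 0x1u) == 0) {
            /* busy-wait */
        }
    }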
00:31:36 <kmc> did I tell y'all I got a FPGA board? https://www.sparkfun.com/products/16526
00:31:44 <kmc> and it's supported by an open source toolchain
00:32:25 <kmc> and nMigen, which is a Python EHDL (is that a reasonable contraction of EDSL HDL?)
00:32:46 <kmc> so far i only did some simple demos with it
00:33:21 <kmc> got distracted by other things... carpentry and mushroom and plant growing projects and life stuff
00:33:25 <kmc> but i will go back to it soon
00:34:16 <b_jonas> ais523: if all you do in the signal handler is to set a flag, then I think relaxed atomics are fine. that means the write to that flag can be ordered in an unexpected way, but you do this for asynchronous signals, which can be delayed anyway. there's a way to force the kernel to deliver the signal handler NOW (as in before the next statement is executed) with sigsuspend, but if you do that, you won't
00:34:22 <b_jonas> have a signal handler that just sets a flag.
00:35:00 <b_jonas> if you want to do more than set a flag or _exit in your signal handler, then it's very likely that relaxed atomics aren't enough.
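A minimal sketch of the "signal handler that just sets a flag" pattern being discussed, using a C11 atomic with relaxed ordering (volatile sig_atomic_t would be the classical alternative); the names are illustrative:

    #include <signal.h>
    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int got_sigint;   /* lock-free atomics are safe to store into from a handler */

    static void on_sigint(int signo)
    {
        (void)signo;
        atomic_store_explicit(&got_sigint, 1, memory_order_relaxed);
    }

    int main(void)
    {
        signal(SIGINT, on_sigint);
        while (!atomic_load_explicit(&got_sigint, memory_order_relaxed)) {
            /* main loop does its work and polls the flag once per iteration */
        }
        puts("got SIGINT, exiting cleanly");
        return 0;
    }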
00:35:19 <kmc> maybe i will implement the RP2040 PIO architecture in nMigen
00:35:50 <b_jonas> as far as I understand, the good use case for relaxed atomics is global counters that you very rarely increment, so you don't want to set up per-thread counters, but you want an exact total in them even in the rare and slow case when two threads increment it at the same time.
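And a sketch of that counter use case, where a relaxed fetch-add keeps the total exact without imposing any ordering on surrounding memory:

    #include <stdatomic.h>

    static atomic_ulong rare_event_count;

    void note_rare_event(void)
    {
        /* atomic even if two threads hit this at the same time, but with
         * no synchronization of any other memory accesses */
        atomic_fetch_add_explicit(&rare_event_count, 1, memory_order_relaxed);
    }

    unsigned long rare_event_total(void)
    {
        return atomic_load_explicit(&rare_event_count, memory_order_relaxed);
    }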
00:39:54 <b_jonas> ais523: re AVX1, it hasn't been around for long enough, there are still cpus without AVX in use, but your point still stands because all x86_64 cpus have SSE2 (even though the Intel manual is careful to specify everything as if that need not be true), so there's no reason to use anything older than SSE2 for floating point.
00:41:24 <esowiki> [[Parse this sic]] https://esolangs.org/w/index.php?diff=80187&oldid=80186 * Digital Hunter * (+18) Undo revision 80186 by [[Special:Contributions/Digital Hunter|Digital Hunter]] ([[User talk:Digital Hunter|talk]])
00:41:46 <b_jonas> ais523: I don't think what you're saying is true. if you're in code with heavy memory access, then accessing data that crosses a cache line boundary (every 16 bytes) can slow your code down. it's not just page boundaries.
00:42:15 <b_jonas> this applies if you're doing a lot of access to memory that's already cached, not if you're accessing main memory once that the cache can never reuse.
00:42:48 <b_jonas> but there are lots of pieces of code that want to do this, accessing memory already in the L1 cache multiple times.
00:52:32 <esowiki> [[Parse this sic]] M https://esolangs.org/w/index.php?diff=80188&oldid=80187 * Digital Hunter * (+42) /* Commands and keywords */
01:04:24 <esowiki> [[Parse this sic]] https://esolangs.org/w/index.php?diff=80189&oldid=80188 * Digital Hunter * (+408) /* Reverse cat */ Added the non-terminating example I was hoping to create. Yippee
01:15:38 -!- arseniiv has quit (Ping timeout: 246 seconds).
01:24:33 -!- rain1 has quit (Quit: WeeChat 3.0).
01:24:54 <esowiki> [[User talk:Bo Tie]] N https://esolangs.org/w/index.php?oldid=80190 * JonoCode9374 * (+193) Created page with "I think your userpage is epic. ~~~~"
01:38:30 <esowiki> [[Parse this sic]] M https://esolangs.org/w/index.php?diff=80191&oldid=80189 * Digital Hunter * (+120) /* Commands and keywords */
01:40:27 <esowiki> [[Parse this sic]] https://esolangs.org/w/index.php?diff=80192&oldid=80191 * Digital Hunter * (+51) /* Commands and keywords */
01:49:12 <esowiki> [[Parse this sic]] M https://esolangs.org/w/index.php?diff=80193&oldid=80192 * Digital Hunter * (+130) /* Numbers */
01:51:50 -!- imode has joined.
01:52:05 -!- imode has quit (Client Quit).
01:52:24 -!- imode has joined.
02:28:24 <esowiki> [[Rubic]] https://esolangs.org/w/index.php?diff=80194&oldid=75604 * Digital Hunter * (+108) /* Example programs */
02:53:30 -!- ubq323 has quit (Quit: WeeChat 2.3).
02:58:55 <esowiki> [[Trivial]] N https://esolangs.org/w/index.php?oldid=80195 * Hakerh400 * (+14709) +[[Trivial]]
02:59:38 <esowiki> [[Language list]] https://esolangs.org/w/index.php?diff=80196&oldid=80149 * Hakerh400 * (+14) +[[Trivial]]
02:59:59 <esowiki> [[User:Hakerh400]] https://esolangs.org/w/index.php?diff=80197&oldid=80108 * Hakerh400 * (+14) +[[Trivial]]
03:11:40 <esowiki> [[Trivial]] M https://esolangs.org/w/index.php?diff=80198&oldid=80195 * Hakerh400 * (+0)
03:16:54 -!- zzo38 has quit (Ping timeout: 265 seconds).
03:52:44 -!- zzo38 has joined.
03:53:19 -!- zzo38 has quit (Remote host closed the connection).
04:11:58 -!- MDude has quit (Quit: Going offline, see ya! (www.adiirc.com)).
04:39:20 -!- ais523 has joined.
04:40:40 -!- ais523 has quit (Remote host closed the connection).
04:40:52 -!- ais523 has joined.
04:40:59 <ais523> <ais523> oh wow, so it turns out that if you pass MAP_NORESERVE to mmap (to tell it that it can find physical memory lazily as you write to your virtual memory, and don't need a guarantee that physical memory is available)
04:41:05 <ais523> <ais523> Linux lets you allocate some really ridiculous amounts of memory, I managed 35 TB in a single block (with almost that much allocated in other blocks)
04:41:42 <ais523> I was hoping it would do that, it means that you can (in effect) use very large MAP_NORESERVE mmaps as a method of reserving address space
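A minimal sketch of the MAP_NORESERVE reservation described above (the 35 TB figure is just the one from the conversation):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t len = 35ull << 40;   /* ~35 TiB of virtual address space */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        printf("reserved %zu bytes at %p\n", len, p);
        /* physical pages are only found lazily, page by page, on first write */
        return 0;
    }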
04:43:17 <shachaf> I remember that Linux has a reserve vs. commit distinction, just like Windows, but no one actually uses it.
04:43:48 <shachaf> So you can write Linux programs that reserve address space and commit it as necessary, and they run fine without overcommit. But almost no program does that because overcommit is so pervasive.
04:44:05 <shachaf> And it's not even possible to ask the kernel what a process's committed memory usage is. It's not anywhere in /proc.
04:45:06 <shachaf> I think the trick is something like mapping pages PROT_READ or PROT_NONE so your process doesn't get charged for them.
04:45:31 <ais523> I was using MAP_NORESERVE with PROT_READ | PROT_WRITE for mine, that seemed to work
04:45:50 <ais523> also, I think /proc/$$/smaps might have the information you're looking for (although not directly)
04:46:11 <shachaf> But you want to be able to make a big mapping and then gradually commit it as you use more memory.
04:46:34 <shachaf> Hmm, I think I looked in smaps and didn't find it.
04:46:44 <shachaf> But maybe I only looked in status?
04:47:12 <shachaf> I don't remember anymore. It would be nice if it was possible.
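A sketch of the reserve/commit pattern shachaf is describing: reserve address space with PROT_NONE (so the pages aren't charged against commit), then mprotect ranges to PROT_READ|PROT_WRITE as more memory is needed; sizes and names are illustrative:

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>

    #define RESERVE_SIZE (1ull << 34)   /* 16 GiB of address space, arbitrary */

    static char *base;
    static size_t committed;

    int reserve_space(void)
    {
        base = mmap(NULL, RESERVE_SIZE, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
        return base == MAP_FAILED ? -1 : 0;
    }

    int commit_more(size_t bytes)   /* bytes should be page-aligned in real code */
    {
        if (mprotect(base + committed, bytes, PROT_READ | PROT_WRITE) != 0)
            return -1;
        committed += bytes;
        return 0;
    }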
04:47:53 <ais523> you can use madvise, or flags to mmap, to actually load physical pages to back your address range
04:48:03 <kmc> I thought Linux will (by default) overcommit allocations even without special mmap flags
04:48:04 <ais523> but normally you just let the kernel do it lazily
04:48:04 <shachaf> If you disable overcommit, the OOM killer should be irrelevant, right?
04:48:28 <kmc> ais523: see also mlock() and mlockall()
04:48:38 <shachaf> I don't really like the Linux culture of overcommit-and-pray.
04:48:48 <ais523> hmm, I wonder whether madvise(MADV_WILLNEED) on large blocks of memory is faster than just directly reading them and letting the kernel handle the pagefault
04:48:57 <ais523> presumably, pagefaults have some overhead as you switch to the kernel and back again
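The two prefaulting strategies being compared, as a sketch (MAP_POPULATE at mmap time being the mmap-flag route mentioned earlier):

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* ask the kernel to populate the range in a single call */
    void prefault_with_madvise(void *p, size_t len)
    {
        madvise(p, len, MADV_WILLNEED);
    }

    /* touch every page from userspace, taking one page fault per page */
    void prefault_by_touching(char *p, size_t len)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        for (size_t off = 0; off < len; off += page)
            p[off] = 0;
    }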
04:49:51 -!- ais523 has quit (Quit: sorry for my connection).
04:50:03 -!- ais523 has joined.
04:50:31 <ais523> <shachaf> If you disable overcommit, the OOM killer should be irrelevant, right? ← sort-of; you still get processes failing randomly but now it's the process that can't allocate memory, as opposed to the process the kernel chooses to pick on
04:50:48 <ais523> because very few applications have any sensible codepath for the out-of-memory situation
04:50:52 <shachaf> That's not random, that's a process asking for memory in a well-defined place and failing.
04:51:12 <shachaf> I guess that's true, a lot of programs are buggy.
04:51:20 <ais523> the process that dies is the next process that tries to allocate memory, which might not be the process responsible for the problem
04:51:44 <ais523> say you have a program that makes intermittent large allocations and it's using up basically all of memory
04:51:51 <ais523> and a program that is using less memory but makes lots of small allocations
04:52:02 <ais523> it is quite possibly the latter program that will hit an OOM situation first
04:52:13 -!- ais523 has quit (Client Quit).
04:52:25 -!- ais523 has joined.
04:52:34 <shachaf> At least with this system people who write programs have a chance of making them work well.
04:52:50 <ais523> I'm interested in why you think failing to handle OOM is a bug
04:53:25 <ais523> IMO, exiting in response to an OOM situation is usually correct (or possibly killing the process that's responsible for the memory, if it's not you)
04:53:31 <shachaf> Well, it's not a bug in every program, some programs just can't do anything.
04:54:07 <ais523> there is also the question of, should the OS start swapping under heavy memory pressure?
04:54:19 <shachaf> But some programs can behave well. Maybe clearing a cache they have, or exiting gracefully.
04:54:31 <ais523> disk has much higher capacities than memory on most systems
04:54:51 <shachaf> I just want to be able to write a reliable program that uses memory -- maybe even without allocating at all after startup -- and doesn't fail.
04:55:34 <ais523> in practice, I think I've seen an actual memory exhaustion only once, all the other times a program leaked more memory than the computer had, it basically ended up using the hard disk as a replacement for memory
04:55:58 <ais523> which of course makes the system unusably slow, which is why the point of memory exhaustion is rarely reached in practice
05:00:00 -!- Deewiant has quit (Ping timeout: 256 seconds).
05:01:45 <ais523> if your program doesn't allocate at all after startup, I don't see why the OOM-killer would pick on it
05:01:45 <shachaf> I guess the concern might be, maybe a program is using all of memory, and then other programs can't even start up, so you can't log in and kill the big program.
05:01:45 <shachaf> But the OOM killer doesn't seem like that great a solution.
05:01:45 <ais523> the basic question is "what do you do when there's no more left of a shared resource?"
05:01:45 <shachaf> Well, for one, maybe it allocates from the kernel's perspective, even if it doesn't from its own.
05:01:45 <shachaf> Because at startup it mmaps 16GB of memory to use for its computations, and it doesn't fault it all right away.
05:01:45 <ais523> Linux's default config won't let you allocate substantially more memory space than the computer has physical memory, even if you don't prefault it (unless you specify MAP_NORESERVE)
05:01:45 <ais523> you can go a little over, but not that much
05:01:45 <shachaf> Hmm, I don't think that's true.
05:01:45 <shachaf> GHC's runtime maps 1TB at startup now, I think?
05:01:45 <ais523> I both read it in the documentation, and tested it a few tens of minutes ago
05:01:45 <ais523> presumably the very large maps are using MAP_NORESERVE
05:01:45 <shachaf> Oh, interesting, maybe I'm just wrong on that and everyone uses NORESERVE.
05:01:45 <ais523> actually, now I'm vaguely curious as to why the pagetables don't end up filling most of memory when you do that, perhaps they can be deduplicated or initialized lazily or something like that
05:01:45 <shachaf> You don't need anything to be in actual page tables, right?
05:01:45 <shachaf> You can just store a big interval in the kernel and allocate the memory when addresses in that interval are faulted.
05:01:45 <ais523> oh right, you can access an address that isn't in the page tables at all and you just get a page fault
05:01:45 <ais523> which the kernel can handle by creating a page table
05:01:45 <ais523> so the maps only need to exist within the kernel
05:02:19 -!- Deewiant has joined.
05:02:59 <b_jonas> ais523: you *can* allocate large amounts of memory that way, but I still think it's a bad idea to implement malloc that way, because you put more hidden performance costs on the kernel that has to manage that address space than you'd have in a more traditional malloc implementation. It's a good esoteric experiment though.
05:03:20 <ais523> I guess an interesting compromise would be for the OS to decide on a physical address that should back a particular piece of memory, but not actually clear it out or set up the pagetables until it's used
05:03:22 <shachaf> Interesting, I thought NORESERVE was the default behavior in Linux until now.
05:03:27 <ais523> so it can use the physical memory for storing caches until then
05:03:33 <shachaf> (With overcommit_memory set to 0.)
05:04:06 <ais523> b_jonas: I'm expecting it to be more efficient, rather than less efficient, because of fewer system calls
05:04:09 <ais523> the page faults happen either way
05:04:10 <shachaf> It would be nice to be able to ask, from a program, to actually really for real have the memory.
05:04:29 <shachaf> Writing to every page is probably enough?
05:04:39 <ais523> overcommit_memory = 1 will noreserve everything; overcommit_memory = 2 will refuse to overcommit at all
05:05:06 <ais523> mlocking is limited to 64 MB by default (although root can increase the limit at will)
05:05:21 <ais523> I think it makes sense that there's a limit for that
05:05:36 <shachaf> The default mlock limit is much higher than it used to be.
05:05:44 <shachaf> max locked memory (kbytes, -l) 4062728
05:06:02 <ais523> maybe you have more physical memory than I do
05:06:12 <ais523> or one of us has it set to a non-default value somehow
05:06:17 <shachaf> I feel like it used to be 64 kB or something.
05:06:30 <ais523> max locked memory (kbytes, -l) 65536
05:06:38 <shachaf> Hmm, I have 32 GB of physical memory, using Ubuntu, Linux 5.8.0.
05:07:09 <ais523> I have a lot less physical memory than you do, and am on Linux 5.4
05:08:05 <ais523> anyway, part of the reason I was looking at this is that I'm considering creating a new executable loader, and was considering possible patterns for allocating the virtual address space
05:08:26 <ais523> one possibility was to manage virtual memory reservations in userspace
05:08:58 <ais523> you could very efficiently do it statically, because virtual memory is so large that you can just divide it up evenly between every library that cares and they'll all have enough
05:09:35 <b_jonas> ais523: the page faults happen either way, but now the kernel has to manage a lot of administrative structures to track what is mapped where and set up page tables correctly, plus since the actual use is sparse, it can't use large pages, so the cpu has to work harder on page table lookups too.
05:09:42 <ais523> one vision I have is for programs to be able to use multiple memory allocators without them treading on each others' toes, and to have a unified free() which can free from any of them
05:10:06 <ais523> b_jonas: by default, the kernel never uses large pages
05:10:17 <ais523> unless userspace requests it
05:10:19 <shachaf> Don't you typically know how memory was allocated when you free it?
05:10:52 <b_jonas> ais523: I think it does use large pages these days on modern kernels. and even if it doesn't, a malloc implementation that allocates everything to a *dense* (non-sparse) region can request large pages.
05:10:57 <ais523> shachaf: often but not always, unless you have extra variables tracking it
05:11:12 <b_jonas> `` ulimit -l # unit is kilobytes
05:11:26 <ais523> a good example is functions that return either a string literal or an allocated string
05:11:27 <shachaf> I feel like supporting multiple allocators is tricky, because many allocators don't have the same interface.
05:11:45 <ais523> well, the aim would be to define a standard interface for allocators
05:11:46 <shachaf> If you use an arena allocator, you don't want to walk your entire data structure and call free() on each node. You want to avoid walking it at all.
05:11:56 <ais523> C already has one (malloc/calloc/realloc/free), but it kind-of sucks
05:12:20 <ais523> oh, I meant in terms of general-purpose allocators; arena allocators often don't support frees at all
05:12:29 <ais523> you free the arena, not the nodes
05:12:39 <shachaf> Right, I meant free() would be a no-op (with a standard interface).
05:13:08 <ais523> but take the example of, say, asprintf
05:13:21 <ais523> (which returns malloc'ed memory of the size of the string)
05:13:33 <b_jonas> ais523: I don't think that's a very good idea. the point is, we want to use sized allocators in programs that allocate a lot of small nodes on the heap, that is, allocators where the free function knows what size and alignment (and other parameters) were passed to the allocate call, because this lets you allocate the small nodes with less memory overhead than when everything has to be tagged by at least its size.
05:13:41 <ais523> that kind-of assumes there's a global allocator, because you don't want to need to have a matching asprintf_free
05:14:11 <ais523> b_jonas: so this is something I've been thinking about a lot
05:14:18 <shachaf> Well, most of the time mallocing individual strings like that isn't so great anyway.
05:14:34 <b_jonas> admittedly you prefer not to allocate a lot of small nodes, or if you do, you want to allocate them from a pool specific to the structure, with context about that structure available to free, but the latter exactly means there's no single free interface without extra parameters
05:14:48 <ais523> is the correct malloc/free interface: a) the allocator supports an efficient API to ask about the size of allocated memory, so that the program doesn't have to track it; or b) free takes an argument for the size of the thing you allocated, so that the allocator doesn't have to track it?
05:15:05 <shachaf> free should take an argument for the size.
05:15:25 <b_jonas> ais523: I think you want both kinds
05:15:40 <b_jonas> and also allocators that take a pool argument that you have to pass to free too
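A purely illustrative sketch of option (b) from ais523's question, a sized-free interface where the caller hands the size and alignment back so the allocator never has to record them; all names here are made up:

    #include <stddef.h>

    struct allocator {
        void *(*alloc)(struct allocator *self, size_t size, size_t align);
        void  (*free)(struct allocator *self, void *ptr, size_t size, size_t align);
    };

    struct node { struct node *next; int value; };

    /* caller side: the size and alignment are usually known statically from the type */
    struct node *make_node(struct allocator *a)
    {
        return a->alloc(a, sizeof(struct node), _Alignof(struct node));
    }

    void drop_node(struct allocator *a, struct node *n)
    {
        a->free(a, n, sizeof(struct node), _Alignof(struct node));
    }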
05:15:43 <shachaf> Also, the most flexible realloc interface is kind of complicated, I think.
05:15:44 <ais523> currently most people track in both places which is just ridiculous overhead
05:15:45 <b_jonas> various different allocators
05:16:06 <shachaf> For example, you might want to give realloc two different possible sizes, one if it can grow in place and one if it can copy.
05:16:10 <b_jonas> especially ones that serve my pet peeve, pool allocators that let you use 32-bit pointers or indexes into a pool
05:16:43 <shachaf> And you might want to ask, with a malloc-style interface, "what's the actual size of the allocated memory?", since it might be bigger than what you asked for, and you might be able to use that.
05:16:45 <ais523> shachaf: I'm beginning to wonder whether "grow in place" is something that's worth optimising for at all
05:17:27 <ais523> b_jonas: anyway, one thing that crossed my mind is that if you're userspace managing the complete address space, you have control over what all the bits of a pointer mean, and, e.g. can encode the arena number in some of them
05:17:34 <shachaf> Well, not supporting realloc at all is another option, of course.
05:17:36 <ais523> or even the size of the allocation
05:17:39 <b_jonas> also I'd like a sized allocator where alloc and free take four size parameters, not just two: the size, the alignment, how many bytes you want readable without a segfault but with arbitrary content before the allocated region, and how many bytes you want readable after. and I want a pony.
05:18:16 <b_jonas> ais523: yes, you can do that too in an interpreter. but you have to be careful so it doesn't slow down dereferencing too much.
05:18:24 <ais523> shachaf: no, realloc is still helpful for large allocations, *but* if they're large enough to use mmap then the mremap doesn't require any copying behind the scenes, just a pagetable update
05:19:00 <shachaf> In that sort of situation maybe you're better off reserving the entire size you might need upfront, and committing it as necessary.
05:19:03 <shachaf> So the address doesn't change.
05:19:10 <b_jonas> ais523: some interpreters, like ruby 1.8 and some lisp or prolog interpreters, already do this by using a tag bit that makes the pointer not a pointer but an integer.
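A tiny illustrative sketch of that low-bit tagging trick: heap pointers are at least 2-byte aligned, so bit 0 can mark immediate integers:

    #include <stdint.h>
    #include <stdbool.h>

    typedef uintptr_t value;

    static bool     value_is_int(value v)    { return (v & 1u) != 0; }
    static value    int_to_value(intptr_t i) { return ((uintptr_t)i << 1) | 1u; }
    static intptr_t value_to_int(value v)    { return (intptr_t)v >> 1; }  /* arithmetic shift assumed */

    static bool  value_is_ptr(value v) { return (v & 1u) == 0; }
    static value ptr_to_value(void *p) { return (uintptr_t)p; }
    static void *value_to_ptr(value v) { return (void *)v; }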
05:19:38 <ais523> b_jonas: I've had further thoughts about your four-argument alloc: on 64-bit systems, just hardcode the readable-before and readable-after arguments to something large like 4GB, the extra argument passing is going to slow it down way more than not being able to use the very ends of the address space
05:20:15 <b_jonas> ais523: I don't need as much as 4GB, but sure
05:20:44 <shachaf> I feel like you need at most 64 bytes before and after.
05:20:54 <b_jonas> shachaf: no, I want a full row of the pixmap
05:21:10 <ais523> my point is that providing a huge amount readable on both sides is very cheap on 64-bit processors
05:21:20 <b_jonas> because I want to reference the point above the currently iterated one
05:21:40 <b_jonas> ais523: yes, you have a point that constants instead of an argument make sense.
05:21:42 <ais523> although, I think some people who use malloc would prefer to have unreadable data around the allocation to help them diagnose accidental read-out-of-bounds
05:22:23 <shachaf> Ah, I remember an allocator that had an option for putting every allocation at the end of a page (or at the beginning).
05:22:43 <ais523> the TLB would hate that :-D
05:23:21 <shachaf> Ah, this was it: https://ourmachinery.com/post/virtual-memory-tricks/
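A sketch of that end-of-page trick: each allocation is placed so it ends right before an unmapped guard page, so any overrun faults immediately (at the cost of at least a page per allocation, and of alignment):

    #define _GNU_SOURCE
    #include <stddef.h>
    #include <unistd.h>
    #include <sys/mman.h>

    void *guarded_alloc(size_t size)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t npages = (size + page - 1) / page;
        char *p = mmap(NULL, (npages + 1) * page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;
        /* make the trailing page inaccessible ... */
        mprotect(p + npages * page, page, PROT_NONE);
        /* ... and return a pointer so the allocation ends exactly at it
           (this sacrifices alignment unless size is rounded up) */
        return p + npages * page - size;
    }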
05:23:26 <b_jonas> also I heard an anecdote that (an older version of) Sicstus prolog used tag bit(s) in the pointers, but ran into trouble because it used *high* bits instead of low bits as tag, which was fine at the start but later when people started to have a gigabyte or more memory on 32-bit machines, it turned out to be not such a great design after all
05:24:23 <ais523> AMD went to specific pains to try to stop people doing that when designing x86-64, for the reason you mention, and yet apparently some people are doing it anyway
05:24:50 <b_jonas> ais523: this was back in the 32-bit era
05:25:01 <ais523> this despite the fact that 48-bit pointers have only just started to not be enough
05:25:32 <ais523> (Intel have stated plans to expand the pointer size to 57 bits, but AFAICT haven't yet released any processors with that size of pointer)
05:25:45 <shachaf> Well, maybe x86 will be dead by the time people use 72 TB of address space.
05:25:55 <b_jonas> ais523: haven't they already released on a few years ago? I'm not sure really
05:25:56 <esowiki> [[User:Language]] https://esolangs.org/w/index.php?diff=80199&oldid=80152 * Quadril-Is * (+1044) Program that pushes 72
05:26:03 <shachaf> Oh, I thought several CPUs already used 56-bit addresses.
05:26:37 <ais523> Linux has support for them added already, but it tends to add support for processor features before the actual processor is released
05:27:07 <ais523> (57-bit x86-64, that is)
05:27:24 <ais523> page tables are 512 entries long on x86-64, so the pointer sizes go up 9 bits at a time
05:27:34 <shachaf> "The extension was first implemented in the Ice Lake processors,[2] and the 4.14 Linux kernel adds support for it.[3]"
05:27:48 <ais523> I haven't heard of Ice Lake
05:28:14 <ais523> OK, that's recent enough that I'm not surprised I missed it
05:28:28 <shachaf> Apparently new Intel chips are using 12-way set associative L1D caches.
05:29:12 <shachaf> Maybe because (apparently) with their VIPT cache design, the cache size is the number of ways * the page size, so the only way to grow the cache is to increase the number of ways.
05:29:51 <b_jonas> I thought that was impossible
05:30:18 <b_jonas> shachaf: and yes, that's the problem with x86, no way to guarantee that ALL pages on the system will be larger than 4k sized
05:30:29 <b_jonas> so the L1 cache can only be 32k
05:30:37 <ais523> growing the number of sets by a factor X would give you a cache that requires less space on the chip, but would be more likely to evict things due to set collisions, compared to growing the number of ways by a factor X
05:31:03 <shachaf> But they can't just grow the number of sets.
05:31:05 <b_jonas> it makes sense, it's just one of the sad realities we have to face because of historical binary compatibility
05:31:06 <ais523> b_jonas: in theory there's no reason why the L1 cache and page size should have anything to do with each other, although I gather that Intel have some sort of design that links them
05:31:33 <shachaf> ais523: I think it's the natural thing with VIPT caches, which I think are very standard.
05:31:47 <shachaf> Though I think some people have gotten around it with trickery.
05:32:08 <ais523> the L1 cache is caching virtual addresses, it's the TLB that caches virtual→physical correspondences
05:32:46 <b_jonas> ais523: I think there is a good reason, in that you want the L1 cache to have very low latency, as in just a few cycles (otherwise it's an L2 cache, not an L1 cache; and also ideally the ability to do two simultaneous reads), and for that you want to pick the cache line before the physical address physically arrives from the TLB cache
05:33:01 <shachaf> Yes, but in order to get cache lookups fast enough, you want to start doing the lookup in parallel with TLB translation.
05:33:10 <shachaf> So you can only use virtual bits of the address for it.
05:33:24 <b_jonas> ais523: no, afaiu the L1 cache is caching physical addresses. it has to, because the process can write the same memory mapped at two different virtual addresses
05:33:25 <ais523> b_jonas: L1 cache typically works purely off the virtual address for that reason
05:33:33 <ais523> it's L2 and L3 that work off physical addresses
05:33:53 <b_jonas> it has to determine the cache line from virtual address, but then verify that the physical address matches or else it can produce incorrect results
05:33:58 <ais523> hmm, maybe this is one of those Intel versus AMD decisions?
05:34:07 <shachaf> I don't know of any x86 CPUs using VIVT sorts of L1 caches.
05:34:09 <b_jonas> at least for writable memory
05:34:16 <shachaf> Which I think is what you're describing?
05:34:27 <shachaf> I mean L1D, maybe L1I is different.
05:34:30 <b_jonas> maybe the L1C cache works with virtual addresses, because L1C can afford to be very slow and flush everything when a cached page is written
05:34:47 <ais523> part of the problem is that the information about this that you find online has highly varying ages which often aren't clear
05:34:56 <b_jonas> ais523: I don't think it's an intel vs AMD thing
05:35:24 <shachaf> I asked about this on Twitter and it turned into a long thread with a hundred replies from some people who have more of an idea than I do.
05:35:27 <ais523> yes, L1C shouldn't be expecting writes at all, and I think it's generally accepted that a write to code memory is one of those things that can reasonably cause a full pipeline stall
05:35:33 <shachaf> But my conclusion was that it's pretty complicated.
05:35:45 <b_jonas> but I admit I don't really understand this, so all I'm saying is just guesses that you shouldn't trust
05:35:58 <b_jonas> ais523: accepted and well documented
05:36:26 <ais523> b_jonas: yes, I mean it's documented, but people also agree that this is a decision that should have been made
05:36:27 <b_jonas> the only reason x86 even has to _detect_ writes to cached code pages is for historical compatibility with 386
05:36:36 <ais523> whereas some things are documented but look bizarre
05:37:36 <b_jonas> like the thing where intel and AMD recommend different instructions as multibyte NOPs. if they can agree on all the rest of the instruction set, why can't they agree on that? sure, their instruction decoders are very different, but still
05:38:05 <b_jonas> couldn't they agree on something that's fast on both brands?
05:38:50 -!- S_Gautam has joined.
05:38:51 <ais523> I think there's a "core" of NOP options which should be fast on both, but it only gets you up to 10 bytes or so
05:39:34 <b_jonas> right, but you want NOPs up to 15 byte long for padding
05:39:49 <ais523> like, it's perfectly legal to put 5 CS: prefixes on a NOP, and Intel and AMD processors will decode this, but the decoders don't like it so neither processor manufacturer recommends you do that
05:40:05 <ais523> I think 5, might be limited to 4, I can't remember
05:40:08 <b_jonas> yes, this is about efficient NOPs, not valid nops
05:40:39 <ais523> having stared at instruction encodings for several days now, I'm pretty sure that 66 logically "should" be the fastest prefix
05:40:59 <b_jonas> anyway, this is an interesting conversation but I really ought to sleep
05:41:06 <ais523> followed by F2/F3, but F3 NOP already means something else
05:42:05 <ais523> this is the sort of sequence that would often be repurposed as a nop with side effects
05:42:53 <ais523> `asm .byte 0xf3, 0x0f, 0x1e, 0xfa
05:43:14 <ais523> does that fit the NOP encoding? it's meant to be backwards-compatible as a NOP
05:43:40 <ais523> ah no, NOP would be 0F 1F
05:44:11 <ais523> maybe it's an undocumented 8-bit NOP
05:44:22 <ais523> `asm .byte 0x0f, 0x1e, 0xfa
05:44:49 <ais523> now I'm really confused
05:46:05 <ais523> FA is 11 / 111 / 010, so that's "direct register access, R=7, B=2"; B is used as the input for a 1-argument instruction so 2 = %edx makes sense
05:46:47 <ais523> but R is set to 7 when it should be 0 according to the documentation, and the LSB of the opcode is 0 when it should be 1 according to the documentation
05:47:21 <ais523> probably Intel is hanging on to a whole 15 undocumented NOP combinations so that they can allocate them for instructions that need to retroactively become NOPs
05:47:43 <ais523> err, backwards-compatibly be treated as NOPs
05:47:57 <HackEso> /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: operand size mismatch for `nop' \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: operand size mismatch for `nop'
05:47:58 <shachaf> `asm .byte 0x0f, 0x1e, 0072
05:48:16 <shachaf> `asm .byte 0x0f, 0x1e, 0002
05:48:22 <shachaf> `asm .byte 0x0f, 0x1e, 0302
05:49:13 <shachaf> I think i misremembered how modr/m bytes work.
05:49:56 <ais523> top two bits are an enum that specify a) whether there's a memory access involved (only 11 doesn't access memory), b) whether there's a constant being added to the memory address and if so how many bytes it's written as
05:50:25 <ais523> next three bits are R, which is a register argument to the instruction (always a register) if it takes 2 or more arguments, and part of the opcode if it takes only 1 argument
05:50:32 <shachaf> Right, I confused 11 with 00.
05:50:51 <ais523> bottom three bits are usually B, which is also a register argument to the instruction, and always used
05:51:04 <ais523> but the values of 101 and 100 are special cases
05:52:06 <ais523> 100 means that there's a SIB byte, used to specify more complicated addressing (it corresponds to %esp, so you can't read from stack memory without a SIB byte)
05:52:47 <ais523> 101 normally means %ebp, but the special case of 00 / xxx / 101 means that there's no register at all, it's using a 32-bit displacement as the address instead
05:53:05 <ais523> so if you want an access via %ebp you always have to explicitly give an offset from it
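The ModRM layout just described, as a tiny decoding sketch (the REX.R/REX.B extension bits and the SIB byte itself are left out):

    #include <stdint.h>
    #include <stdbool.h>

    struct modrm {
        unsigned mod;   /* top 2 bits: 11 = register operand; 00/01/10 = memory
                           with no / 8-bit / 32-bit displacement */
        unsigned reg;   /* middle 3 bits: register operand, or opcode extension
                           for one-operand instructions */
        unsigned rm;    /* bottom 3 bits: base register, with two special cases */
    };

    static struct modrm decode_modrm(uint8_t b)
    {
        struct modrm m = { b >> 6, (b >> 3) & 7, b & 7 };
        return m;
    }

    /* rm = 100 (%esp) in a memory form means a SIB byte follows;
       mod = 00 with rm = 101 (%ebp) means a bare 32-bit displacement. */
    static bool has_sib(struct modrm m)   { return m.mod != 3 && m.rm == 4; }
    static bool is_disp32(struct modrm m) { return m.mod == 0 && m.rm == 5; }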
05:53:10 <shachaf> I wrote an encoder for all this a couple of years ago, but clearly the details have slipped my memory.
05:53:20 <HackEso> 0: 67 0f 1f 03 nopl (%ebx)
05:53:30 <HackEso> 0: 0f 1f 45 00 nopl 0x0(%rbp)
05:53:33 <HackEso> 0: 0f 1f 04 24 nopl (%rsp)
05:53:55 <shachaf> %rsp also corresponds to r12 or r13 or so, which has the same encoding issue.
05:54:22 <ais523> because the fourth bit of R and B is in the REX prefix, not part of the ModRM byte
05:55:05 <ais523> `asm rex.x nopl (%rax)
05:55:06 <HackEso> 0: 42 0f 1f 00 rex.X nopl (%rax)
05:55:29 <ais523> this is the one case of encoding that confuses me
05:55:37 <ais523> `asm rex.x nopl (%rax, %r12, 1)
05:55:39 <HackEso> /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: same type of prefix used twice \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: no such instruction: `nopl (%rax,%r12,1)'
05:55:44 <ais523> `asm nopl (%rax, %r12, 1)
05:55:45 <HackEso> 0: 42 0f 1f 04 20 nopl (%rax,%r12,1)
05:55:47 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80200&oldid=80161 * Quadril-Is * (+9) /* Something */
05:56:07 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80201&oldid=80200 * Quadril-Is * (+0) /* Something */
05:56:22 <ais523> normally %rsp as X is used to write a null SIB byte that does nothing (this is only useful in the case when you want %rsp as B, as far as I can tell, or to pad out space)
05:56:39 <ais523> but %r12 as X is *not* a special case, it actually uses %r12
05:56:46 <ais523> `asm .byte 0x0f, 0x1f, 0x04, 0x20
05:56:47 <HackEso> 0: 0f 1f 04 20 nopl (%rax,%riz,1)
05:57:05 <ais523> huh, %riz, that's a new one (must be "integer zero")
05:57:37 <shachaf> Yes, I remember this. I think the assembler doesn't even accept it as input.
05:58:11 <HackEso> /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: bad register name `%riz' \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: no such instruction: `nopl (%rax,%riz,1)'
05:58:22 <shachaf> `asm .byte 0x0f, 0x1f, 0004, 0240
05:58:23 <HackEso> 0: 0f 1f 04 a0 nopl (%rax,%riz,4)
05:59:02 <ais523> so ModRM+SIB bytes of 00xxx100 00100yyy and the single ModRM byte 00xxxyyy are identical in *almost* every context
05:59:08 <ais523> except when you have a rex.x prefix
05:59:36 <ais523> I hate this sort of special case, because i'm hoping to have a domain-specific language for instruction encoding and this sort of thing just blows it up
06:00:07 <HackEso> /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: invalid character '(' in mnemonic \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: invalid character '(' in mnemonic
06:00:17 <HackEso> /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: expecting scale factor of 1, 2, 4, or 8: got `' \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: no such instruction: `nopl (%rax,,1)'
06:00:35 <HackEso> 0: 0f 1f 04 05 00 00 00 00 nopl 0x0(,%rax,1)
06:00:43 <ais523> what an inconsistent syntax :-D
06:02:00 <ais523> also I didn't even realise whitespace was significant there
06:02:00 <shachaf> I wonder why they introduced riz.
06:02:14 <shachaf> Whitespace is significant?
06:02:21 <shachaf> Oh, those are different errors.
06:02:23 <HackEso> /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: invalid character '(' in mnemonic \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: invalid character '(' in mnemonic
06:02:48 <ais523> I have been tempted to invent my own asm syntax, with = signs after output arguments
06:02:56 <ais523> so that I don't keep forgetting which way the arguments go
06:03:09 <shachaf> I'm used to AT&T syntax but I should probably switch to Intel syntax.
06:03:19 <ais523> would also help differentiate between the two encodings of register-register MOV
06:03:20 <shachaf> Since that way I can just read the Intel manual.
06:03:53 <shachaf> I don't like the whole "dword ptr [...]" thing in Intel syntax.
06:04:30 <ais523> `asm .byte 0x8b, 0xc1, 0x89, 0xc8
06:04:32 <HackEso> 0: 8b c1 mov %ecx,%eax \ 2: 89 c8 mov %ecx,%eax
06:04:56 <ais523> there are probably no processors where this difference matters, but it still feels wrong that you can't specify
06:05:35 <ais523> I like the way AT&T syntax gives instructions length suffixes, but remembering the suffixes is hard
06:05:45 <shachaf> I might just use mov64 and so on.
06:05:56 <ais523> yes, I think that's an improvement
06:06:03 <shachaf> That's what I did in my C library.
06:06:11 <ais523> or logarithms, mov3 for bytes, mov4 for words, mov5 for dwords, mov6 for qwords
06:06:45 <shachaf> `` echo 'long foo(long x) { return x; }' | gcc -x c /dev/stdin -o /tmp/foo.o && objdump -d /tmp/foo.o | grep mov
06:06:48 <HackEso> /usr/bin/ld: /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/Scrt1.o: in function `_start': \ (.text+0x20): undefined reference to `main' \ collect2: error: ld returned 1 exit status
06:06:51 <ais523> the better reason to put the width on the opcode, though, is then you can stop changing the name of a register every time you access it with a different width
06:06:55 <shachaf> `` echo 'long foo(long x) { return x; }' | gcc -c -x c /dev/stdin -o /tmp/foo.o && objdump -d /tmp/foo.o | grep mov
06:06:56 <HackEso> 1:48 89 e5 mov %rsp,%rbp \ 4:48 89 7d f8 mov %rdi,-0x8(%rbp) \ 8:48 8b 45 f8 mov -0x8(%rbp),%rax
06:07:09 <shachaf> `` echo 'long foo(long x) { return x; }' | gcc -Os -c -x c /dev/stdin -o /tmp/foo.o && objdump -d /tmp/foo.o | grep mov
06:07:11 <HackEso> 0:48 89 f8 mov %rdi,%rax
06:07:17 <shachaf> `` echo 'long foo(long x) { return x; }' | clang -Os -c -x c /dev/stdin -o /tmp/foo.o && objdump -d /tmp/foo.o | grep mov
06:07:18 <HackEso> /hackenv/bin/`: line 5: clang: command not found
06:07:36 <ais523> shachaf: IIRC gcc and clang use the same assembler as each other, at least on Linux, so you'll get the same output
06:07:52 <shachaf> clang doesn't use its own assembler?
06:08:27 <ais523> ah no, llvm-as works on LLVM bitcode, not x86-64 instructions
06:08:58 <shachaf> What does llvm use on Windows?
06:09:10 <shachaf> I thought it had its own assembler.
06:09:16 <shachaf> It has llvm-mc which includes an assembler, right?
06:09:28 <ais523> I guess it could just use masm
06:09:41 <ais523> but shipping an assembler would also make sense
06:09:42 <shachaf> Doesn't it support inline assembly?
06:09:52 <ais523> yes but it's literally quoted into the assembler input
06:09:55 <shachaf> Which I'd expect to be portable rather than use the platform syntax.
06:12:19 <ais523> `` echo 'long foo(long x) { asm("sal %0, $1 // test" : "+r" (x)); return x }' | gcc -S -o /tmp/t.s; cat /tmp/t.s
06:12:21 <HackEso> gcc: fatal error: no input files \ compilation terminated. \ cat: /tmp/t.s: No such file or directory
06:12:30 <ais523> `` echo 'long foo(long x) { asm("sal %0, $1 // test" : "+r" (x)); return x }' | gcc -S -o /tmp/t.s -x c /dev/stdin; cat /tmp/t.s
06:12:31 <HackEso> /dev/stdin: In function ‘foo’: \ /dev/stdin:1:66: error: expected ‘;’ before ‘}’ token \ cat: /tmp/t.s: No such file or directory
06:13:02 <ais523> `` echo 'long foo(long x) { asm("sal %0, $1 // test" : "+r" (x)); return x; }' | gcc -S -o /tmp/t.s -x c /dev/stdin; cat /tmp/t.s
06:13:03 <HackEso> .file"stdin" \ .text \ .globlfoo \ .typefoo, @function \ foo: \ .LFB0: \ .cfi_startproc \ pushq%rbp \ .cfi_def_cfa_offset 16 \ .cfi_offset 6, -16 \ movq%rsp, %rbp \ .cfi_def_cfa_register 6 \ movq%rdi, -8(%rbp) \ movq-8(%rbp), %rax \ #APP \ # 1 "/dev/stdin" 1 \ sal %rax, $1 // test \ # 0 "" 2 \ #NO_APP \ movq%rax, -8(%rbp) \ movq-8(%rbp), %rax \ popq%rbp \ .cfi_def_cfa 7, 8 \ ret \ .cfi_endproc \ .LFE0: \ .size
06:13:11 <ais523> see, the comment got copied into the output file
06:13:30 <ais523> `` echo 'long foo(long x) { asm("%0!" : "+r" (x)); return x; }' | gcc -S -o /tmp/t.s -x c /dev/stdin; cat /tmp/t.s
06:13:31 <HackEso> .file"stdin" \ .text \ .globlfoo \ .typefoo, @function \ foo: \ .LFB0: \ .cfi_startproc \ pushq%rbp \ .cfi_def_cfa_offset 16 \ .cfi_offset 6, -16 \ movq%rsp, %rbp \ .cfi_def_cfa_register 6 \ movq%rdi, -8(%rbp) \ movq-8(%rbp), %rax \ #APP \ # 1 "/dev/stdin" 1 \ %rax! \ # 0 "" 2 \ #NO_APP \ movq%rax, -8(%rbp) \ movq-8(%rbp), %rax \ popq%rbp \ .cfi_def_cfa 7, 8 \ ret \ .cfi_endproc \ .LFE0: \ .sizefoo, .-foo \ .
06:13:48 <ais523> and the syntax doesn't have to make any sense
06:13:50 <shachaf> I just meant that I suspect the assembly syntax that clang lets you embed is consistent between Linux and Windows, so I doubt it just uses masm.
06:14:08 <ais523> I suspect it's different
06:14:32 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80202&oldid=80201 * Quadril-Is * (-9) /* Something */
06:15:16 <shachaf> Besides, it supports cross-compiling, right?
06:16:28 <shachaf> `` echo 'long foo(long x) { asm(".syntax intel\nmov eax, eax" : "+r" (x)); return x; }' | gcc -c -S -o /tmp/t.s -x c /dev/stdin; cat /tmp/t.s
06:16:29 <HackEso> .file"stdin" \ .text \ .globlfoo \ .typefoo, @function \ foo: \ .LFB0: \ .cfi_startproc \ pushq%rbp \ .cfi_def_cfa_offset 16 \ .cfi_offset 6, -16 \ movq%rsp, %rbp \ .cfi_def_cfa_register 6 \ movq%rdi, -8(%rbp) \ movq-8(%rbp), %rax \ #APP \ # 1 "/dev/stdin" 1 \ .syntax intel \ mov eax, eax \ # 0 "" 2 \ #NO_APP \ movq%rax, -8(%rbp) \ movq-8(%rbp), %rax \ popq%rbp \ .cfi_def_cfa 7, 8 \ ret \ .cfi_endproc \ .LFE0: \
06:16:55 <shachaf> For some reason I thought it restored the syntax to att automatically. I guess not.
06:17:11 <ais523> shachaf: I just checked Clang's documentation about asm commands
06:17:30 <ais523> it is a literal hyperlink to gcc's documentation about asm commands, on gcc's website
06:18:15 <ais523> so I'd expect it to work the same way; if it worked differently it should at least be documented?
06:21:31 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80203&oldid=80202 * Quadril-Is * (+5) /* Something */
06:23:27 <ais523> hmm, some searches imply that clang's inline asm always uses AT&T syntax, even on Windows, so probably it does have its own assembler
06:23:46 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80204&oldid=80203 * Quadril-Is * (+8) /* Something */
06:24:32 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80205&oldid=80204 * Quadril-Is * (+36) /* Something */
06:24:48 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80206&oldid=80205 * Quadril-Is * (-30) /* Something */
06:24:49 <HackEso> /hackenv/bin/`: line 5: type: llvm-mc: not found
06:28:43 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80207&oldid=80206 * Quadril-Is * (+169) /* Something */
06:29:47 <shachaf> It seems ridiculous to me that there's any compiler anywhere that doesn't support cross-compiling.
06:31:06 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80208&oldid=80207 * Quadril-Is * (+149) /* Invalid links */
06:32:16 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80209&oldid=80208 * Quadril-Is * (+6) /* Special characters */
06:32:24 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80210&oldid=80209 * Quadril-Is * (+6) /* Special characters */
06:33:59 <ais523> shachaf: presumably, to cross-compile, any inline asm would have to be written for the target platform
06:34:35 <shachaf> (Though Windows and Linux can share x86-64 assembly.)
06:34:51 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80211&oldid=80210 * Quadril-Is * (+20) /* Special starting things */
06:34:52 <ais523> actually, my experience is that compilers themselves normally support cross-compiling, but the toolchains surrounding them (especially the build tools) often don't
06:35:40 <ais523> Windows and Linux have different calling conventions, so you could share inline asm but only as long as it didn't call functions and wasn't a function itself
06:36:12 <shachaf> Toolchains and build tools should definitely support cross-compiling.
06:36:36 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80212&oldid=80211 * Quadril-Is * (-25) /* Special starting things */
06:38:59 <ais523> with compilers it's even harder because you have three platforms to deal with (compiler build, compiler run = target program build, target program run)
06:39:27 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80213&oldid=80212 * Quadril-Is * (+14) /* Special starting things */
06:39:42 <esowiki> [[Esolang:Sandbox]] https://esolangs.org/w/index.php?diff=80214&oldid=80213 * Quadril-Is * (-97) /* Bad title */
06:39:54 <ais523> I managed to get C-INTERCAL to support compiler build != compiler run (it was hard, and involves what is in effect two independent autoconf scripts)
06:40:19 <ais523> it doesn't directly support target build != target run yet, though
06:47:42 <ais523> btw, I figured out why sharing libc.so between a lot of programs is helpful: it's not specifically to save memory nowadays, but to increase the chance that commonly used bits of libc are in the L2 cache
06:47:59 <ais523> (also, the L3 cache, and main memory)
06:48:18 <ais523> so program startup is faster because it doesn't have to copy in libc from disk every time, like it would in a statically linked program
06:48:28 <shachaf> Are cache misses on libc really a significant part of the runtime of programs?
06:48:49 <ais523> it wouldn't surprise me if they were, for small programs that run quickly
06:49:09 <ais523> disk access is so much slower than just about anything else
06:49:10 <shachaf> My guess is that it's negligible for almost all programs.
06:49:16 <shachaf> But it would be interesting to measure.
06:49:43 <ais523> I wonder how you evict a particular file from memory altogether on Linux
06:49:56 <ais523> (ideally without affecting the rest of the system in the process)
06:50:06 <shachaf> I only know how to evict the entire cache.
06:51:07 <ais523> you could get the file out of L3 cache by mmapping it, faulting it in, then clflushing every cache line in it
06:51:15 <ais523> (that will also take it out of L1 and L2 caches)
06:51:28 <ais523> getting it out of main memory seems harder, though (especially as you just faulted it in there!)
06:51:46 <shachaf> Maybe if you write to it with O_DIRECT.
07:02:24 -!- Sgeo has quit (Read error: Connection reset by peer).
07:04:08 <ais523> I've been looking for a way in the Linux kernel sources, but haven't found one (that said, I'm terrible at finding anything in there)
07:49:52 -!- imode has quit (Quit: Sleep.).
07:54:09 -!- ais523 has quit (Quit: quit).
07:56:57 <esowiki> [[Truth-machine]] https://esolangs.org/w/index.php?diff=80215&oldid=80183 * Hakerh400 * (+259) +[[Trivial]]
07:58:59 <esowiki> [[Truth-machine]] M https://esolangs.org/w/index.php?diff=80216&oldid=80215 * Hakerh400 * (+25) /* Trivial */
07:59:33 <esowiki> [[Truth-machine]] M https://esolangs.org/w/index.php?diff=80217&oldid=80216 * Hakerh400 * (+1) /* Trivial */
08:38:30 -!- S_Gautam has quit (Quit: Connection closed for inactivity).
08:55:14 -!- delta23 has joined.
08:55:44 <b_jonas> "<ais523> actually, my experience is that compilers themselves normally support cross-compiling, but the toolchains surrounding them (especially the build tools) often don't" => this yes, except it's even more the system libraries than the build tools. gcc and clang in theory work fine on or for windows, but it's very hard to actually use them on windows because of the lack of a good toolchain that works
08:55:50 <b_jonas> with them. and gcc/clang, for some reason, still only support the ABI where long is 64-bit on windows, so you can't just mix and match them with native windows toolchains. it's strange, you'd think it would be trivial to add a separate mode to them where long is 32-bit (plus implement the remaining builtin functions and pragmas for msvc compatibility), but that's not happening.
08:56:53 <b_jonas> I am sort of hoping that https://ziglang.org/ will fix this: it promises to ship a working C compiler toolchain based on clang and a custom libc to windows, but it does not, at least right now, try to ship a C++ compiler toolchain
08:58:44 <b_jonas> "I wonder how you evict a particular file from memory altogether on Linux" => perhaps with fadvise or posix_madvise, or by truncating it to zero length
08:58:59 <b_jonas> but that won't work for libc
09:04:39 -!- none30 has quit (*.net *.split).
09:04:40 -!- myname has quit (*.net *.split).
09:04:52 <b_jonas> ais523: opening the library and then posix_fadvise(.., .., .., POSIX_FADV_DONTNEED) might work, but this is only good for read-only files like a shared library, otherwise it has the side effect of possibly discarding cached writes
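A sketch of that eviction trick; it is fine for a read-only file like a shared library (for a file with dirty pages you'd fsync first), though any running process that uses the library can fault the pages straight back in:

    #include <fcntl.h>
    #include <unistd.h>

    int evict_file_cache(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        /* offset 0 with length 0 means "the whole file" */
        int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        close(fd);
        return rc;
    }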
09:06:36 -!- none30 has joined.
09:06:36 -!- myname has joined.
09:10:00 -!- Discordian[m] has quit (Ping timeout: 244 seconds).
09:10:12 -!- wmww has quit (Ping timeout: 243 seconds).
09:10:22 -!- none30 has quit (Ping timeout: 258 seconds).
09:10:28 -!- acedic[m] has quit (Ping timeout: 265 seconds).
09:22:46 -!- sprock has quit (Ping timeout: 272 seconds).
09:37:16 -!- none30 has joined.
09:39:12 -!- Discordian[m] has joined.
09:44:34 -!- LKoen has joined.
09:45:21 -!- acedic[m] has joined.
09:55:57 -!- none30 has quit (Ping timeout: 240 seconds).
09:56:04 -!- acedic[m] has quit (Ping timeout: 240 seconds).
09:56:06 -!- Discordian[m] has quit (Ping timeout: 246 seconds).
10:11:41 -!- rain1 has joined.
10:21:20 -!- none30 has joined.
10:40:34 -!- mniip has quit (Ping timeout: 606 seconds).
10:47:58 -!- acedic[m] has joined.
10:47:58 -!- wmww has joined.
10:47:59 -!- Discordian[m] has joined.
11:11:09 -!- ArthurStrong has quit (Quit: leaving).
12:06:08 -!- delta23 has quit (Quit: Leaving).
12:15:24 <b_jonas> hey #esoteric, I have a question about Android UI since I don't generally use Android computers. you know how Android generally has three buttons at the bottom of the screen, these used to be physical buttons but these days they're just software ones? have they changed this such that the third button besides back and home screen is no longer the menu button, when did they change this, and how could they
12:15:30 <b_jonas> change something without breaking compat with all existing third party programs?
12:16:05 <LKoen> I think mostly the third party programs don't use the buttons
12:16:21 <LKoen> like, your program may provide a functionality that must be called when the user hits the button "return"
12:17:07 <LKoen> but the program itself just provides a functionality that corresponds to "return", and whether the user presses the button or returns by some other way is unknown
12:17:16 <LKoen> I meant "back" not "return"
12:17:40 <b_jonas> and does that apply to the menu button as well?
12:17:48 <LKoen> on my phone the third button is "view all opened windows"
12:17:54 <LKoen> I've never had a "menu"
12:19:10 <LKoen> but then I haven't had an android for very long. I used to have a phone with 12 buttons that could only send text messages and phone calls
12:24:20 <myname> b_jonas: do you perhaps use a modified android version provided by the manufacturer of your phone?
12:25:00 <myname> i vaguely remember switching longpress and single press on some of those buttons on some ui modifications
12:25:09 <myname> but it should be reversible via settings
12:27:41 <b_jonas> myname: my phone doesn't have any Android version thank you very much
12:46:50 <fizzie> There's three versions of Android navigation that have been in the stock AOSP builds: 3-button navigation, 2-button navigation and gesture navigation.
12:47:32 <fizzie> I don't remember exactly which version included which one, and which ones are still available. I think at least one of my phones still offers all three. (It's a configurable setting.)
12:50:06 <b_jonas> fizzie: what does 3-button navigation mean?
12:50:15 <b_jonas> so this is still configurable? ok
12:50:45 <fizzie> It's the one that has the three buttons "back", "home" and "recent apps" (which is officially called "overview", but I don't think that is such a well-known term).
12:50:56 <fizzie> I've never seen the third button to be "menu" either.
12:51:29 <fizzie> But manufacturers do tend to do all kinds of UI customizations. I think I had a test device with four buttons once.
12:53:01 <fizzie> Looking around, though, apparently they did used to have that in AOSP too, just longer ago than when I got into Android (pre-Lollipop).
12:53:53 <fizzie> As for "without breaking compat", I don't imagine they did, but it *has* been a long time now.
12:54:07 <fizzie> https://developer.android.com/guide/topics/ui/menus "Beginning with Android 3.0 (API level 11), Android-powered devices are no longer required to provide a dedicated Menu button. With this change, Android apps should migrate away from a dependence on the traditional 6-item menu panel and instead provide an app bar to present common user actions."
12:54:44 <myname> well, it can only "break compatibility" to the user, the apps aren't really aware of the navigation besides some signals
12:55:14 <fizzie> I mean, they can expect there to be a menu button and not provide any other way to launch some functionality.
12:55:23 <fizzie> So I think that'd be pretty much a breaking change.
12:55:50 <fizzie> s/can/could, back then,/
12:56:31 <myname> ah, the three-dots one, if i remember correctly, that was a per-app thing. i have no idea if it is actually removed
12:56:48 <fizzie> It wasn't a three-dots initially.
12:59:45 <fizzie> AFAICT, it was one of the three primary buttons (in the pre-ICS days), with a menu symbol. Then it got shifted to be an "overflow" three dots thing (in *addition* to the three main buttons, only shown if the app defines an options menu), and then gotten rid of completely.
13:03:11 <fizzie> https://developer.android.com/guide/topics/ui/menus#options-menu "Where the items in your options menu appear on the screen depends on the version for which you've developed your application: ..."
13:04:03 <fizzie> That makes me wonder what would happen if I could still find an app with targetSdk=10 and run it on a modern phone, would it provide some system UI affordance to show the menu.
13:06:02 <fizzie> If it does (or at least did for a while), then I guess that's the way they could make that change without breaking compatibility: by treating apps that target a version of the platform where a menu button was still mandatory differently. (If you declare targetSdk >= 11, you presumably promise it will work even without a menu button.)
13:07:50 -!- MDude has joined.
13:30:23 -!- TheLie has joined.
13:39:23 -!- arseniiv has joined.
13:53:31 -!- SpaceDecEva has joined.
13:55:14 -!- SpaceDecEva has quit (Client Quit).
13:57:26 -!- SpaceDecEva has joined.
14:23:01 -!- SpaceDecEva has quit (Quit: Connection closed).
14:30:56 -!- TheLie has quit (Remote host closed the connection).
14:36:47 -!- TheLie has joined.
14:41:30 -!- ubq323 has joined.
14:57:08 -!- ubq323 has quit (Quit: WeeChat 2.3).
14:57:24 -!- ubq323 has joined.
15:04:02 -!- Emerald has joined.
15:05:59 -!- Emerald has quit (Client Quit).
15:07:47 -!- Emerald has joined.
15:13:05 -!- Emerald has quit (Ping timeout: 248 seconds).
15:58:38 -!- naivesheep has quit (Quit: ZNC 1.8.2 - https://znc.in).
16:03:09 -!- naivesheep has joined.
16:11:00 -!- naivesheep has quit (Quit: ZNC 1.8.2 - https://znc.in).
16:11:24 -!- naivesheep has joined.
16:14:09 <esowiki> [[Trivial]] M https://esolangs.org/w/index.php?diff=80218&oldid=80198 * Hakerh400 * (+1)
16:17:49 -!- myname has quit (Quit: WeeChat 2.9).
16:18:15 -!- LKoen has quit (Remote host closed the connection).
16:18:33 -!- myname has joined.
16:21:08 <esowiki> [[User:Ivancr72]] https://esolangs.org/w/index.php?diff=80219&oldid=53157 * Ivancr72 * (-208) Replaced content with "im cringe"
16:28:22 -!- TheLie has quit (Remote host closed the connection).
16:29:04 -!- Deewiant has quit (Ping timeout: 256 seconds).
16:29:23 -!- Deewiant has joined.
16:35:50 <esowiki> [[NyaScript]] https://esolangs.org/w/index.php?diff=80220&oldid=80058 * ThatCookie * (+272) Added Variables
16:42:23 -!- Sgeo has joined.
16:56:56 -!- Lord_of_Life_ has joined.
16:59:17 -!- Lord_of_Life has quit (Ping timeout: 265 seconds).
16:59:17 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
17:18:42 -!- TheLie has joined.
17:19:35 -!- LKoen has joined.
18:24:48 -!- ubq323 has quit (Ping timeout: 260 seconds).
18:42:43 -!- arseniiv has quit (Ping timeout: 264 seconds).
19:08:38 -!- ubq323 has joined.
19:10:28 -!- LKoen has quit (Remote host closed the connection).
19:19:15 -!- SpaceDecEva has joined.
19:20:44 -!- SpaceDecEva has quit (Client Quit).
19:21:11 -!- LKoen has joined.
19:33:54 -!- essays has joined.
19:39:39 -!- arseniiv has joined.
19:49:11 -!- ubq323 has quit (Ping timeout: 265 seconds).
19:53:51 -!- arseniiv has quit (Ping timeout: 256 seconds).
20:05:19 <esowiki> [[Parse this sic]] https://esolangs.org/w/index.php?diff=80221&oldid=80193 * Digital Hunter * (+1054) /* 99 bottles of beer */
20:31:11 <esowiki> [[Parse this sic/Numbers]] N https://esolangs.org/w/index.php?oldid=80222 * Digital Hunter * (+5576) Hi, if it's not my place to create such a page let me know and I'll revert it! Or you can just delete it. I'm not sure quite what sort of category belongs here; the Underload page has Programming techniques but I don't feel that's appropriate here.
20:33:47 <esowiki> [[Parse this sic/Numbers]] M https://esolangs.org/w/index.php?diff=80223&oldid=80222 * Digital Hunter * (+3)
20:34:26 <esowiki> [[Parse this sic]] https://esolangs.org/w/index.php?diff=80224&oldid=80221 * Digital Hunter * (+315) /* Numbers */
20:34:51 <esowiki> [[Parse this sic]] M https://esolangs.org/w/index.php?diff=80225&oldid=80224 * Digital Hunter * (-38) /* Info to come */ The list of numbers has arrived!
20:44:37 -!- sprock has joined.
20:47:26 <esowiki> [[Talk:NyaScript]] N https://esolangs.org/w/index.php?oldid=80226 * PythonshellDebugwindow * (+347) /* Undocumented behaviour */ new section
21:06:13 -!- mmmattyx has joined.
21:06:37 -!- ubq323 has joined.
21:13:41 -!- diverger has joined.
21:21:13 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] N https://esolangs.org/w/index.php?oldid=80227 * The-Ennemy * (+286) Created page with " ==About this tutorial== ==About Brainfuck== ==About asm2bf== ==Installing and "Hello World!"== ==Basic concepts== ==Conditionals== ==Memory model: taperam and stack==..."
21:30:52 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80228&oldid=80227 * The-Ennemy * (+391) /* About this tutorial */
21:41:33 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80229&oldid=80228 * The-Ennemy * (+147) /* About this tutorial */
21:44:29 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80230&oldid=80229 * The-Ennemy * (+2) /* Stack access */
21:46:15 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80231&oldid=80230 * The-Ennemy * (+179) /* About this tutorial */
22:12:27 <esowiki> [[Parse this sic]] https://esolangs.org/w/index.php?diff=80232&oldid=80225 * Digital Hunter * (+0) /* Numbers */ my base conversion was bugged! Surprisingly not a PTS mistake, but one in understanding how concatenation works
22:13:12 <esowiki> [[Parse this sic]] M https://esolangs.org/w/index.php?diff=80233&oldid=80232 * Digital Hunter * (+0) /* 99 bottles of beer */ updated to my realisation of the base conversion macro bug
22:19:22 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80234&oldid=80231 * The-Ennemy * (+1370) /* Installing and "Hello World!" */
22:29:42 <esowiki> [[Deadfish]] https://esolangs.org/w/index.php?diff=80235&oldid=79871 * Digital Hunter * (+638) /* Implementations */ Added an entry for Parse this sic.
22:30:39 <esowiki> [[Deadfish]] M https://esolangs.org/w/index.php?diff=80236&oldid=80235 * Digital Hunter * (+27) /* Parse this sic */
22:32:35 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80237&oldid=80234 * The-Ennemy * (+1017) /* Installing and "Hello World!" */
22:39:49 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80238&oldid=80237 * The-Ennemy * (+304) /* Installing and "Hello World!" */
22:54:44 -!- TheLie has quit (Remote host closed the connection).
23:08:04 -!- LKoen has quit (Remote host closed the connection).
23:17:00 -!- zzo38 has joined.
23:24:15 -!- LKoen has joined.
23:27:17 -!- LKoen has quit (Client Quit).
23:48:25 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80239&oldid=80238 * The-Ennemy * (+1178)
23:50:27 -!- ArthurStrong has joined.
23:52:03 <esowiki> [[User:The-Ennemy/asm2bf-tutorial]] https://esolangs.org/w/index.php?diff=80240&oldid=80239 * The-Ennemy * (+189) /* Basic concepts */
23:55:57 -!- mmmattyx has quit (Quit: Connection closed for inactivity).