00:23:07 Are pointers signed or unsigned? 00:23:42 they could be 00:24:42 How should I think of them? 00:24:50 what does it actually mean though? when do you have sign/zero-extension of pointers where you could tell the difference? 00:26:00 I do like to think that x86-64 has signed pointers the way they're usually used (with kernel space in negative addresses) 00:29:43 Right, that's the sort of thing I was thinking. 00:29:49 Maybe it makes no difference. 00:30:27 -!- imode has quit (Ping timeout: 265 seconds). 00:32:20 . o O ( the difference is negative ) 00:33:10 shachaf: signs are pointers, so pointers should be signed, is that what you mean? 00:33:42 Exactly. 00:33:52 But then what are cosigns? 00:34:26 They are orthogonal to signs. 00:34:56 17:76 06 jbe 1f 00:34:59 So confusil. 00:35:26 I just learned about gas "1f" label syntax a few days ago, and I kept thinking it was a 0x1f offset. 00:35:34 This time it actually is a 0x1f offset! 00:35:43 :) 00:36:17 (To be fair, this is objdump output, so it wouldn't use the 1f syntax.) 00:37:18 To this day I find it confusing that the offset of relative jumps is relative to the address following the current instruction. 00:37:37 0x19 + 0x06 = 0x1f 00:37:55 Yes. 00:38:20 In particular I was trying to figure out a jump target in gdb a few days ago and I computed it relative to $rip without thinking. 00:38:42 No, not a jump target, rip-relative addressing. 00:39:01 It makes sense, of course (the instruction has been decoded, and correspondingly, the IP advanced, when the jump happens) 00:39:27 (thinking in terms of *very* old processors like 8086) 00:40:29 Sure. 00:41:54 Is rip-relative addressing the same way? I guess it must be but I've already forgotten. 00:42:33 yes it is. 00:43:02 Yep, I just checked. 00:43:23 I should know this since I implemented most of the addressing modes recently. 00:43:54 Though not some of the weird ones like 64-bit (%eax). 00:43:57 Does anyone use that? 00:44:20 Actually I think so. 00:44:29 `asm addr32; mov (%rax),%rdi 00:44:30 0: 67 48 8b 38 mov (%eax),%rdi 00:44:45 -!- imode has joined. 00:44:58 Having 32-bit pointers is still attractive to conserve memory. 00:45:23 But you can use them with 64-bit registers, can't you? 00:45:43 Yes. 00:45:44 since you get zero-extension for most operations, you can usually just use %rax with a 32-bit address and save a byte 00:45:51 At least if you write something like mov foo, %eax; mov (%rax), bar 00:46:24 I wonder, is %eax sign-extended when you use (%eax)? 00:46:27 I imagine not. 00:46:45 But I can make up reasons... getting proper overflow behavior for (%eax + 4*%esi + 0xbase).... 00:47:17 but what would use something like that in 64-bit code? 00:47:55 When do you want overflow behavior for addresses? 00:48:07 olsner: I *would* hope that this is a purely theoretical reason :) 00:48:17 but it's an interesting question whether the address size affects all of the address calculation or just the size of the input registers 00:49:15 Clearly I expect that it affects the whole computation.
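A minimal sketch, in C, of the arithmetic being discussed: relative branches (and rip-relative operands) are resolved against the address of the next instruction, so the jbe above lands at 0x17 + 2 + 0x06 = 0x1f. The helper name and example values are only illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* x86 rel8/rel32 branches (and rip-relative addressing) are relative to the
       address *after* the instruction, i.e. after the IP has already advanced. */
    static uint64_t rel8_target(uint64_t insn_addr, unsigned insn_len, int8_t disp)
    {
        return insn_addr + insn_len + (int64_t)disp;   /* disp is sign-extended */
    }

    int main(void)
    {
        /* "17: 76 06  jbe 1f" -- a 2-byte jbe at 0x17 with displacement 0x06 */
        printf("%#llx\n", (unsigned long long)rel8_target(0x17, 2, 0x06));  /* prints 0x1f */
        return 0;
    }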
00:49:28 `asm mov (%eax,%edx), %edi 00:49:29 0: 67 8b 3c 10 mov (%eax,%edx,1),%edi 00:49:36 `asm mov (%rax,%edx), %edi 00:49:37 ​/tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: `(%rax,%edx)' is not a valid base/index expression \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: missing ')' \ /tmp/asm.s:1: Error: junk `)' after expression 00:49:50 `asm addr32; mov (%rax,%rdx), %edi 00:49:51 0: 67 8b 3c 10 mov (%eax,%edx,1),%edi 00:50:23 `asm addr32; gs; mov (%rax,%rdx), %edi 00:50:24 0: 67 65 8b 3c 10 mov %gs:(%eax,%edx,1),%edi 00:50:25 `asm gs; addr32; mov (%rax,%rdx), %edi 00:50:26 0: 65 67 8b 3c 10 mov %gs:(%eax,%edx,1),%edi 00:50:35 Just write the prefixes in any order you like. So convenient. 00:50:50 `asm data16; mov (%rax,%rdx), %edi 00:50:51 0: 66 8b 3c 10 mov (%rax,%rdx,1),%di 00:51:05 Golly. 00:51:30 `asm movq (%rax), %xmm0 00:51:31 0: f3 0f 7e 00 movq (%rax),%xmm0 00:51:36 `asm movq (%eax), %xmm0 00:51:37 0: 67 f3 0f 7e 00 movq (%eax),%xmm0 00:53:11 oh right, that was the gas syntax for these funny addressing modes 00:53:38 offset(%base,%index,multiplier) 00:54:36 possible dumb reason: someone planned/built a 32-bit x86 emulator (before compatibility mode was invented?) and convinced AMD to provide support for extra-stupid JIT compilers that just add prefixes to specific instructions 00:55:35 olsner: sorry, I lost track... reason for what? 00:55:42 for having the 32-bit override 00:55:57 ah. 00:56:11 plausible enough 00:57:11 I also bet this was rather cheap to support. 00:58:16 In context... which is a CPU that supports real mode (which has 32 bit addressing mode via the address size prefix) and 32 bit mode support for legacy software. 01:01:07 [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=66918&oldid=66897 * DmilkaSTD * (+179) /* Introductions */ 01:02:08 [[Esomachine]] N https://esolangs.org/w/index.php?oldid=66919 * DmilkaSTD * (+3608) Created page with "Esomachine was made by [https://esolangs.org/wiki/User:DmilkaSTD DmilkaSTD]. Imagine we have an array with infinite length. When it starts every array index is locked (If an..." 01:09:19 congratulations, schlock. you might get to save the galaxy single-handed... 01:17:38 -!- arseniiv has quit (Ping timeout: 246 seconds). 01:24:17 -!- oerjan has quit (Quit: Nite). 01:25:33 -!- imode has quit (Ping timeout: 265 seconds). 01:27:05 -!- imode has joined. 01:29:53 What other bizarro addressing modes are there in amd64? 01:30:21 Also did I link this tcc SSE bug I found? https://lists.nongnu.org/archive/html/tinycc-devel/2019-10/msg00033.html 01:30:29 It was somewhat annoying to track down. 02:08:38 what bizarro mode are you talking about 02:08:58 the base + mult*index + offset mode is pretty reasonable, aside from the gas syntax for it 02:09:44 with Intel syntax it'd be like MOV EDI, DWORD PTR [4*EAX + EDX + 7] 02:09:52 or what have you 02:09:53 Sure, but you have addr32, fs/gs, rip-relative, all sorts of things. 02:10:50 Presumably there are some things I don't know about. 02:11:09 Also there are all the little details, which I think I got right? 02:11:36 `asm lea (%r11), %rax 02:11:37 0: 49 8d 03 lea (%r11),%rax 02:11:38 `asm lea (%r12), %rax 02:11:39 0: 49 8d 04 24 lea (%r12),%rax 02:11:57 -!- imode has quit (Ping timeout: 265 seconds). 02:12:03 gotta include that sib byte for r12 02:12:46 Of course I haven't done SSE/AVX/whatever at all, or the VEX prefix, or anything like that. 
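A small model, in C, of the disp(%base,%index,scale) calculation used above. The question left open in the conversation is whether the 0x67 address-size prefix wraps the whole result to 32 bits or only the input registers; the addr32 flag below encodes the former as an assumption, not a verified fact.

    #include <stdint.h>

    /* Effective address for gas syntax disp(%base,%index,scale). */
    static uint64_t effective_addr(uint64_t base, uint64_t index, unsigned scale,
                                   int32_t disp, int addr32 /* 0x67 prefix present? */)
    {
        uint64_t ea = base + index * scale + (int64_t)disp;
        if (addr32)
            ea = (uint32_t)ea;   /* assumption: the whole sum wraps to 32 bits */
        return ea;
    }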
02:14:25 so much nonsense 02:15:02 what instruction encoding are you into 02:15:55 It doesn't even have a bit-reversed addressing mode. 02:16:30 What's that? 02:16:40 It's a thing DSPs have, for speeding up FFTs. 02:16:59 The TI TMS320C54x at least has it. 02:17:00 `asm lea (%r13), %rax 02:17:01 0: 49 8d 45 00 lea 0x0(%r13),%rax 02:17:10 Right, r13 has a special case too. 02:17:21 oh? 02:17:35 But I think that one is modrm+offset rather than modrm+SIB. 02:17:44 fizzie: That sounds pretty fancy. I should learn about fancy DSP things. 02:17:59 They also have circular addressing modes. 02:18:06 For FIR filters and suchlike. 02:18:47 Speaking of circles, what's the nicest way to write a circular buffer? 02:18:56 I don't like having a boolean to distinguish empty from full. 02:19:30 You can go with head + length instead of head + tail. 02:19:43 Then you have 0 and N for empty and full. 02:20:14 Hmm, I guess. 02:20:23 There's also that one fancy thing that I think Chrome used somewhere, or someone used somewhere. 02:20:25 What about the case where you have a separate reader and writer? 02:20:33 I know of some other tricks: 02:20:48 Map two copies of the same buffer in adjacent address space, so you get a contiguous buffer. 02:20:52 The Bip-Buffer, that's what I was thinking of. 02:21:01 The Bip-Buffer doesn't need the mapping trick. 02:21:29 (On the other hand, it may waste some space.) 02:22:30 Another trick I heard about is, instead of keeping the read/written size mod the buffer size, keep the total size, and mask it at use time. 02:22:40 bip booper 02:24:47 I can't find any reference to anyone actually using the bip-buffer, just a few random implementations, so maybe I imagined that. 02:25:06 I'm reading about it now. 02:25:08 spsc-bip-buffer is "#108 in Concurrency" on lib.rs, which sounds like a TripAdvisor ranking. 02:26:44 This explanation doesn't seem very clear. 02:28:28 What's the benefit of this? 02:28:49 what is lib.rs 02:28:59 Is it that writes are always contiguous (but reads might not be)? 02:32:14 -!- imode has joined. 02:42:13 AIUI, the reads are contiguous too. 02:43:39 Maybe I don't understand the diagram in https://www.codeproject.com/Articles/3479/The-Bip-Buffer-The-Circular-Buffer-with-a-Twist 02:44:06 What happens in 5? From their description it looks like both A and B contain data. 02:44:35 Right, reads of multiple writes are not necessarily contiguous. 02:45:01 Maybe. 02:45:13 Hmm. 02:45:31 I guess the idea is that a library might want to write a fixed-size thing and you want to make sure to be able to fit it in the buffer? 02:45:44 And another library can also interpret that fixed-size thing since it's contiguous. 02:46:33 (Or, y'know, non-fixed-size.) 02:46:37 Or, no, maybe reads of any size can be contiguous too, it's just that in stage 5 if you wanted to read more than the orange bit some copying would be involved. 02:46:51 ...or maybe not. 02:47:31 As in copying all the data in the buffer? 02:47:37 Yeah, I was looking at the API, for reading you just ask the thing for the largest contiguous block. 02:47:38 Regular circular buffers have this property too. 02:47:44 Which API? 02:48:05 Well, the BipBuffer class described there. 02:48:20 But I guess it's still useful, if you (say) put length-delimited protos there. 02:48:29 Oh, the one on that page. 02:48:51 I guess that's true? 02:48:56 As long as you write the thing into one contiguous (reserved) block, the reading side can also read it as one contiguous block. 02:49:09 Right. 
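A minimal sketch of the last trick mentioned (free-running read/write counters, reduced modulo the size only when indexing), assuming a power-of-two capacity and a single reader plus a single writer; the names are made up. No empty/full flag is needed because the two counters only ever differ by at most the capacity.

    #include <stddef.h>
    #include <stdint.h>

    #define RING_CAP 1024u                 /* must be a power of two */

    struct ring {
        unsigned char buf[RING_CAP];
        uint32_t wr;                       /* total bytes ever written */
        uint32_t rd;                       /* total bytes ever read    */
    };

    /* Empty when wr == rd, full when wr - rd == RING_CAP; unsigned wraparound
       of the counters is harmless because only the difference is used. */
    static size_t ring_used(const struct ring *r) { return r->wr - r->rd; }
    static size_t ring_free(const struct ring *r) { return RING_CAP - ring_used(r); }

    static int ring_put(struct ring *r, unsigned char c)
    {
        if (ring_free(r) == 0) return 0;
        r->buf[r->wr & (RING_CAP - 1)] = c;   /* mask only at use time */
        r->wr++;
        return 1;
    }

    static int ring_get(struct ring *r, unsigned char *c)
    {
        if (ring_used(r) == 0) return 0;
        *c = r->buf[r->rd & (RING_CAP - 1)];
        r->rd++;
        return 1;
    }

For a genuinely concurrent reader and writer the two counters would additionally need atomic loads/stores with acquire/release ordering; the sketch only shows the indexing trick.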
02:49:24 Maybe it would be better for APIs to support reading and writing in multiple chunks. 02:49:45 Maybe. 02:49:45 I guess there's some concern that the API will want to keep an internal buffer and do some copying in order to support that. 02:50:28 I think the mmap solution is better if you want things to be contiguous. 03:04:50 -!- imode has quit (Ping timeout: 240 seconds). 03:22:28 -!- imode has joined. 03:28:46 @metar lowi 03:28:47 LOWI 020320Z AUTO 27011KT 9000 FEW001 BKN002 08/07 Q1006 03:31:23 @metar koak 03:31:23 KOAK 020253Z 00000KT 10SM CLR 13/01 A3011 RMK AO2 SLP194 T01330011 53004 04:17:47 @metar ksea 04:17:47 KSEA 020353Z 01013KT 10SM SCT200 09/03 A3037 RMK AO2 SLP293 T00890033 04:27:11 -!- hppavilion[1] has joined. 04:32:45 using the thought I had earlier, you can build interesting data pipelines. 04:33:14 sum $1234 bitvector 04:33:45 or sum bitvector $1234 number 04:34:30 because you push a handle to the concurrent process to the queue, any further processes can be constructed, passed that handle, and form a linear dataflow graph. 04:35:45 a bidirectional one as well. `number` takes a number and a process to send that value to. `bitvector` takes a process, receives a number and sends the bits of that number to the taken process. `sum` takes a process, receives a number and keeps a running tally of that number which is available on request. 04:35:55 you can do lazy evaluation with that. 04:37:04 you can design a process that takes two handles, receives something and broadcasts it to the two processes it has handles to. 05:01:36 https://hatebin.com/shiyqdhisf not bad. 05:04:17 bitvector's logic is wrong, it should send zero on completion. 05:07:42 https://hatebin.com/lriwwfiijo that's better. 05:09:16 I feel like you can get pretty granular with this. 05:48:54 -!- ArthurStrong has quit (Quit: leaving). 05:59:23 -!- imode has quit (Ping timeout: 276 seconds). 06:00:46 -!- tromp_ has joined. 06:03:14 -!- tromp has quit (Ping timeout: 246 seconds). 06:09:26 -!- imode has joined. 06:14:13 [[Metatape]] https://esolangs.org/w/index.php?diff=66920&oldid=53872 * HactarCE * (+4400) Overhauled Metatape according to 2019 edition 07:12:11 -!- imode has quit (Ping timeout: 276 seconds). 08:26:05 -!- Phantom_Hoover has joined. 08:53:09 -!- kspalaiologos has joined. 09:01:17 -!- Phantom_Hoover has quit (Ping timeout: 240 seconds). 10:45:32 [[Esomachine]] https://esolangs.org/w/index.php?diff=66921&oldid=66919 * DmilkaSTD * (+0) 10:46:46 [[Esomachine]] https://esolangs.org/w/index.php?diff=66922&oldid=66921 * DmilkaSTD * (+15) 10:47:25 [[Esomachine]] https://esolangs.org/w/index.php?diff=66923&oldid=66922 * DmilkaSTD * (+10) 10:47:52 [[Esomachine]] https://esolangs.org/w/index.php?diff=66924&oldid=66923 * DmilkaSTD * (-7) 10:48:21 [[Esomachine]] https://esolangs.org/w/index.php?diff=66925&oldid=66924 * DmilkaSTD * (+10) 10:56:03 -!- tromp has joined. 10:57:33 [[Esomachine]] https://esolangs.org/w/index.php?diff=66926&oldid=66925 * DmilkaSTD * (+156) 10:58:59 -!- tromp_ has quit (Ping timeout: 246 seconds). 11:17:39 -!- hppavilion[1] has quit (Remote host closed the connection). 11:24:09 has someone taken up on esoshell project? 11:36:37 -!- arseniiv has joined. 13:10:39 -!- kspalaiologos has quit (Quit: Leaving). 13:19:01 -!- b_jonas has joined. 13:24:21 kspalaiologos: I beg to differ, but I can write usable parsers from scratch. just don't look at my ancient psz interpreter. that was long ago, and I've matured since. 
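Returning to the double-mapping idea from the circular-buffer discussion: a Linux-specific sketch using memfd_create, which maps the same pages twice back to back so that any window of up to size bytes is contiguous in virtual memory. The function name is made up, size is assumed to be a multiple of the page size, and error checks on the two MAP_FIXED mappings are elided.

    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include <stddef.h>

    /* Reserve 2*size of address space, then map the same memfd over both halves,
       so buf[i] and buf[i + size] alias the same byte. */
    static void *ring_map(size_t size)
    {
        int fd = memfd_create("ring", 0);
        if (fd < 0) return NULL;
        if (ftruncate(fd, size) < 0) { close(fd); return NULL; }

        void *base = mmap(NULL, 2 * size, PROT_NONE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) { close(fd); return NULL; }
        mmap(base, size, PROT_READ | PROT_WRITE, MAP_SHARED | MAP_FIXED, fd, 0);
        mmap((char *)base + size, size, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_FIXED, fd, 0);
        close(fd);
        return base;
    }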
13:26:13 " I shouldn't have slept at math lessons" => meh, it's quite possible that many of your lessons were a waste of time. get some good books and learn from them instead. 13:27:34 " Are pointers signed or unsigned?" => I don't think that distinction makes sense there. you don't high-multiply pointers, or compare pointers from two different arrays 13:29:13 but if I have to choose, they're probably signed on x86_64 (because the top bits are usually the same unless you have a future cpu with a 2**64 bit long address space), unsigned on x86_16 (because they are mapped into x86_32's address space by zero filling), 13:32:01 [[Kill]] N https://esolangs.org/w/index.php?oldid=66927 * CMinusMinus * (+723) Created page with "'''Kill''' is a one-word, Python-interpreted, joke programming language created by [[User:CMinusMinus]]. The sole purpose of this language, is to delete the code. The only leg..." 13:35:27 shikhin: for x86_32 though, signed vs unsigned does make a difference, and I don't know which one is used. either look it up in the ELF ABI docs, or allocate a 2.5 GB sized array (for which you need either an x86_64 kernel, or an x86_32 kernel configured to the slower 3GB+1GB address space split rather than the default 2GB+2GB split) and see how it's layed out and how pointers in it compare 13:35:33 argh 13:35:39 s/shikhin/shachaf/ 13:35:49 I suck at autocompletion 13:36:13 [[Kill]] https://esolangs.org/w/index.php?diff=66928&oldid=66927 * CMinusMinus * (+102) 13:36:32 oerjan: ^ 13:37:17 " To this day I find it confusing that the offset of relative jumps is relative to the address following the current instruction." => I find that one natural, and the other convention (which some cpu archs use) unnatural 13:37:42 [[Kill]] https://esolangs.org/w/index.php?diff=66929&oldid=66928 * CMinusMinus * (+22) 13:40:32 [[Language list]] https://esolangs.org/w/index.php?diff=66930&oldid=66864 * CMinusMinus * (+11) Added "Kill" Language 13:40:57 [[Kill]] https://esolangs.org/w/index.php?diff=66931&oldid=66929 * CMinusMinus * (+2) 13:42:57 [[Kill]] https://esolangs.org/w/index.php?diff=66932&oldid=66931 * CMinusMinus * (+3) 13:43:18 " Maybe it would be better for APIs to support reading and writing in multiple chunks." => they already do, if you mean multiple chunks in memory assembled to a single chunk in the file descriptor or back, with preadv/pwritev, plus the aio api eg. aio_write is parametrized like that by default (I wanted to say "works like that by default" but it's probably not correct to use "works" for the 13:43:24 linux aio api at all) 13:43:55 hmm no, I remembered wrong, aio_write doesn't use preadv-style scatter-gather addressing 13:44:10 [[User:CMinusMinus]] https://esolangs.org/w/index.php?diff=66933&oldid=66903 * CMinusMinus * (+27) 13:44:10 I'm not talking about OS APIs, which support this already, but other APIs. 13:44:12 what api was it than otehr tahn preadv/pwritev, I wonder? I'm sure there was another 13:44:24 Presumably that's also what fizzie is talking about also. 13:44:38 Just some arbitrary function in your code like parse_thing() that takes a buffer and a length. 13:44:43 what other APIs then? 13:44:47 ah 13:45:25 shachaf: I think https://laurikari.net/tre/ allows you match a regex to a string that is not continuous, and even from a string that's read lazily 13:45:56 OK, but regex matching is one special-case API which is already naturally written as a state machine anyway. 
13:46:17 but of course contiguous buffers have a lot of advantages 13:46:20 easier to optimize 13:46:42 I worked with bitmap images at my previous job, and I wouldn't like a non-contiguous bitmap image 13:49:01 if I was given one, I'd just copy it into a proper contiguous buffer (that is also aligned so that its rows are padded to a size that is 64 bytes long modulo 128 bytes; possibly padded a little at the beginning and end so I can read past the ends; and with the color channels either together and padded as if you had four channels if the input has three, or separately each one in a layer, depending on 13:49:07 what I want to do with the image) 13:57:06 Of course APIs can do that, and keep their own buffers. 13:57:29 But then you have a bunch of different buffers all over the system, which doesn't seem that nice. 14:10:23 shachaf: no, in my experience once you have contiguous buffers, I could use them with multiple APIs in place for image processing 14:10:50 there are subtleties about pixel formats, but in practice most of the time I didn't have to do unnecessary copies 14:11:49 OK, but maybe you have one buffer for parsing an HTTP request, and then another buffer for parsing the image it contains, or whatever. 14:12:10 Presumably you want to avoid a bunch of copies if you can. 14:12:25 shachaf: the HTTP buffer has compressed images 14:12:40 Sure, another buffer for decompression. 14:12:46 I have to decode those to raw uncompressed anyway if I want to work with them 14:12:58 I'm describing the kind of thing you might want to avoid. 14:13:06 Can your decompression algorithm operate directly on the circular buffer? 14:13:06 but in practice when I get an image from HTTP, I save it for multiple uses rather than process it directly 14:14:10 shachaf: hmm, I don't know the details, I usually decompressed images with either ImageMagick or ffmpeg, and read them from a regular file 14:16:01 shachaf: for decoding video, I did store the uncompressed frames sparsely, so each frame can be anywhere in memory and they can be reused as a circular buffer 14:20:00 shachaf: when the video is read from network directly, rather than local file, then ffmpeg does the reading, so I don't know what kind of buffer it uses 14:20:50 admittedly I used ffmpeg as a separate process, so there are two copies of the uncompressed raw data there 14:20:58 so I guess I was wrong above 14:21:09 three copies if I want a planewise format 14:45:18 Running a separate process for video decoding is obviously not reasonable for any kind of special-purpose application. 14:56:54 -!- Phantom_Hoover has joined. 15:36:24 -!- kspalaiologos has joined. 16:19:45 . o O ( Prove or disprove: There is a POSIX extended regular expression of length shorter than 10000 that accepts the multiples of 7 in decimal, with leading zeros allowed. ) 16:20:31 *Main> length rex ==> 10791 16:21:52 Which doesn't include the anchors ^( and )$, so 10795 is where I'm really at. 16:23:50 Make that 10793 (the parentheses are not required). Oh and I'm excluding the empty string but as far as I can tell this doesn't affect the length anyway; it's a matter of using + or * in one place. 16:32:46 -!- Phantom_Hoover has quit (Ping timeout: 265 seconds). 16:45:46 -!- Phantom_Hoover has joined. 16:51:22 -!- xkapastel has joined. 16:59:32 -!- lldd_ has joined. 17:26:20 -!- imode has joined. 17:32:33 -!- imode has quit (Quit: WeeChat 2.6). 17:33:32 -!- imode has joined. 17:41:43 -!- lldd_ has quit (Quit: Leaving).
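For reference, the automaton behind that puzzle is tiny when written directly; a sketch in C, tracking the value of the digits read so far modulo 7 (leading zeros allowed, empty string rejected, matching the constraints stated above). The blowup only appears when this 7-state DFA is flattened into a POSIX ERE.

    /* Accepts exactly the decimal multiples of 7 (leading zeros allowed). */
    static int multiple_of_7(const char *s)
    {
        if (*s == '\0') return 0;              /* reject the empty string */
        int rem = 0;                           /* 7 states: current remainder */
        for (; *s; s++) {
            if (*s < '0' || *s > '9') return 0;
            rem = (rem * 10 + (*s - '0')) % 7;
        }
        return rem == 0;
    }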
19:04:36 -!- Phantom_Hoover has quit (Ping timeout: 240 seconds). 19:15:25 -!- imode has quit (Ping timeout: 268 seconds). 19:16:58 -!- Cale has quit (Ping timeout: 245 seconds). 19:29:06 -!- kspalaiologos has quit (Quit: Leaving). 19:29:12 -!- Cale has joined. 19:35:22 -!- imode has joined. 19:53:29 int-e: eww. 19:54:03 int-e: also isn't it ^[[:space:]][-+]( )$ 19:54:09 no wait 19:54:17 ^[[:space:]][-+]?( )$ 19:59:03 -!- Phantom_Hoover has joined. 19:59:49 -!- imode has quit (Ping timeout: 268 seconds). 20:00:54 -!- imode has joined. 20:08:05 b_jonas: nah, no signs 20:14:36 int-e: if i want to be picky, i'd say .* does accept the multiples of 7 20:16:43 myname: yeah but you know what I meant anyway 20:17:34 Also obviously the right way to write such a regular expression is to not do it. :P 20:17:49 (But the second best way is to write a program to do it for you.) 20:38:53 int-e: yeah, there are programs that can automatically convert a nondet finite automaton to a regex, even with the blowup 20:39:16 I know of one 20:39:30 but there are probably more because it's a known algorithm 20:40:08 -!- imode has quit (Ping timeout: 276 seconds). 20:40:26 sure 20:41:08 -!- xkapastel has quit (Quit: Connection closed for inactivity). 20:41:31 But do they also try to optimize the result size... 20:44:40 int-e: obviously the regex would be shorter in perl regex syntax, where you can use the "recursion" feature, not to build recursive regex, but to reuse longer regex multiple times 20:44:59 yeah that would definitely help 20:52:31 hm, surprisingly large blowup from such a reasonably sized state machine 20:52:57 -!- dingwat has quit (Quit: Connection closed for inactivity). 20:55:19 it's easily O(3^n) where n is the number of states 20:55:58 -!- oerjan has joined. 20:56:47 So... let me try... 5 states (remainders 0..4 only): 689; 6 states: 2701; 7 states: 10793 20:57:31 That really looks a bit worse than O(3^n). But of course the number of states is still small. 20:57:44 But wait. O(4^n) actually makes more sense. 20:57:57 And it looks pretty close to that. 20:58:04 Hi oerjan. 20:59:11 hi int-e 20:59:56 -!- MDude has joined. 21:04:49 But eh. My (fairly primitive) code is here: http://paste.debian.net/1113236/ ... it's optimizing, including a small peephole optimization (intelligently choosing between [07] and 0|7 depending on context), but fundamentally the question is whether there is a better way to convert a DFA (which happens to be a minimal NFA for the purpose) to a regexp than removing states one by one. 21:06:19 And I just don't know the answer to that question. 21:06:38 I tried a bit with https://github.com/qntm/greenery, it seems to always produce a regexp that converts back to the same DFA (which I suspect is not optimal for making a short regexp) 21:07:43 Well this is inherently a DFA... you have 7 remainders to keep track of, so that's a minimum of 7 states, and if you use 7 states then you'll be dealing with a DFA. 21:08:08 oerjan: ^ <-- i have no idea why you pinged me there 21:08:22 unless it was to joke about autocompletion 21:08:38 (in which case you need to work on your jokes) 21:09:17 Maybe b_jonas wanted to highlight me. Which would've been appropriate. :) 21:09:23 heh 21:21:15 -!- imode has joined. 21:21:15 what books on numeric recipes related to floating-point (or esp. IEEE 754) issues could you recommend? With recipes for inverse hyperbolic functions or e.g.
whether it makes sense to define `coshm1(x) := 0.5 * (expm1(x) + expm1(-x))` or whether one should just use plain `cosh(x) - 1` 21:27:23 fun question.... 21:28:02 -!- heroux has quit (Ping timeout: 240 seconds). 21:28:03 oerjan: sorry, that should have highlighted olsner 21:28:39 fizzie: the https://esolangs.org/logs/all.html website seems to be down 21:28:57 I mean, cosh(x) - 1 suffers from terrible cancellation around 0, but 0.5 * (expm1(x) + expm1(-x)) still suffers from cancellation (expm1(x) = x + x^2/2 + O(x^3), expm1(-x) = -x + x^2/2 + O(x^3), coshm1(x) = x^2/2 + O(x^3)...) 21:30:35 So expm1(log1p(sinh(x)**2)/2) may be better. 21:31:06 arseniiv: the fourth edition of Knuth volume 2, only it's not yet written 21:31:07 Modulo function names. 21:31:49 b_jonas: :( 21:32:31 int-e: ah, I suspected my definition would have a flaw 21:32:51 ah sorry, that will be third edition 21:32:54 no wait 21:32:56 fourth edition 21:33:06 anyway, until that time, you can look at the existing third edition 21:33:31 it doesn't talk about IEEE 754, but it does talk about floating point in general 21:34:03 -!- atslash has quit (Quit: This computer has gone to sleep). 21:34:07 MIX uses a different floating point format that shifts by mix bytes, rather than bits, but the main text considers other bases too, including base 2 21:35:44 what the current edition doesn't consider is features specific to IEEE 754, which are infinities and NaNs 21:36:23 b_jonas: I'm not sure what's up with it, my monitoring has been saying every now and then that it's down for a bit. 21:36:27 Working for me now. 21:36:35 b_jonas: mix bytes => wait, there are its own bytes? How many bits? 21:36:49 arseniiv: either six bits, or two decimal digits 21:37:28 arseniiv: technically the book says the byte has a range from 0 to a maximum that is between 63 and 99 inclusive, so a binary MIX goes up to 63, a decimal up to 99, a ternary up to 81 21:37:42 arseniiv: see our wiki article 21:37:51 (and the book itself) 21:38:30 I wonder if MIX-related issues don't make the text obscurer 21:39:17 yeah, I was about to go searching to see if I have it somewhere 21:39:22 don't remember 21:40:27 have what? 21:40:30 the books? 21:43:22 -!- OugiOshino has changed nick to BWBellairs. 21:44:08 b_jonas: hm I don't seem to find there many of the redundant recipes I was looking for 21:45:11 fizzie: yes, it's up now 21:45:13 b_jonas: yeah, it seems I have that volume here, but the contents page doesn't look too promising 21:45:21 -!- heroux has joined. 21:46:08 I mean, for basics I have that "What every computer scientist should know about FP arithmetic" article reprint-as-an-appendix-from-some-Sun-manual 21:46:40 texlive's documentation packages are ridiculously big 21:47:45 but the careful examination of numeric issues by myself seems unnecessary if… hm I wonder if I should look at Numpy code 21:49:00 arseniiv: TAOCP vol 2 almost certainly isn't enough for what you asked, 21:49:08 but I'm not familiar with other books to recommend 21:49:25 I haven't read many such books really 21:49:45 -!- heroux has quit (Read error: Connection reset by peer). 21:49:46 I'm aware that there *are* numerical recipe books... 21:50:11 -!- heroux has joined. 21:50:41 b_jonas: ah, OK 21:51:05 int-e: yeah, they just seem elusive 21:51:21 there's Stoyan Gisbert's numerical analysis textbook, which is freely available online, but I think only exists in Hungarian.
I don't know if there's any translation 21:51:57 @where floating-point 21:51:57 "What Every Programmer Should Know About Floating-Point Arithmetic" at and "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David 21:51:57 Goldberg in 1991 at and 21:52:01 b_jonas: finally a good excuse to learn hungarian? 21:52:01 it has three volumes, the first one is an introduction one that goes pretty far, and then the second and third are about solving partial differential equations numerically 21:52:21 there are certainly more good books, I'm just not familiar with them 21:52:57 for the logs, IIRC Stoyan Gisbert's book is available somewhere from http://www.tankonyvtar.hu/hu/bongeszes , but that server is down right now 21:53:31 it says that it's down until 2019-11-03 though, so unless you see the date autoincrement, it'll hopefully come up later 21:53:39 and then the second and third are about solving partial differential equations numerically => (aaaah!! you know, this is the night here, how would I go to sleep now) 21:54:19 (I'm afraid of numeric PDEs after my naive Schrödinger model blew up) 21:54:44 arseniiv: right, the whole thing is so tricky that it's no wonder you need two volumes on it 21:54:50 pfft. 21:54:51 -!- heroux has quit (Read error: Connection reset by peer). 21:54:56 @where ffi 21:54:56 http://www.cse.unsw.edu.au/~chak/haskell/ffi/ 21:55:00 I think the first volume covers ODEs and numerical integration 21:55:10 int-e: lol 21:55:22 -!- heroux has joined. 21:55:29 dead link though 21:56:32 int-e: https://www.haskell.org/onlinereport/haskell2010/haskellch8.html#x15-1490008 is probably the current one 21:56:44 it's integrated into the main standard from the separate tech report 21:56:57 @where+ ffi http://www.haskell.org/onlinereport/haskell2010/haskellch8.html 21:56:57 Nice! 21:57:29 @hwere ffi 21:57:29 http://www.haskell.org/onlinereport/haskell2010/haskellch8.html 21:57:47 yeah that's what I copied 21:59:33 that's not standalone though, you need most of chapters 24 to 37 inclusive 21:59:38 @ʍere ffi -- just testing 21:59:38 http://www.haskell.org/onlinereport/haskell2010/haskellch8.html 21:59:40 :o 21:59:42 which have the relevant Foreign modules 22:00:26 -!- unlimiter has joined. 22:00:32 e.g. https://www.haskell.org/onlinereport/haskell2010/haskellch28.html#x36-27400028 defines the Foreign.C.CLong type 22:03:18 so perhaps https://www.haskell.org/onlinereport/haskell2010/haskell.html#QQ2-15-159 would be a better link 22:03:24 int-e: ^ 22:06:14 -!- arseniiv_ has joined. 22:06:30 I don't like the anchor :P 22:06:50 -!- arseniiv has quit (Ping timeout: 240 seconds). 22:07:31 int-e: same without anchor then? 22:07:37 -!- unlimiter has quit (Quit: WeeChat 2.6). 22:07:51 well then it's no longer the FFI specifically 22:07:54 @where report 22:07:54 http://www.haskell.org/onlinereport/haskell2010/ (more: http://www.haskell.org/haskellwiki/Definition) 22:08:03 int-e: sure, but it's where you look up the ffi 22:08:08 which might not be obvious 22:08:27 I think it even has additions to the original ffi report 22:08:27 I'm happy with the link to chapter 8 22:08:31 ok 22:15:48 how did something like [miau] end up being spelled "meow" in English? Prior to hearing the pronunciation I thought it should be something like [mju] and secretly thought how strange it would be to hear that from cats 22:16:04 -!- arseniiv_ has changed nick to arseniiv. 22:17:32 arseniiv_: no, that's "mew" which is a synonym 22:25:33 -!- heroux has quit (Read error: Connection reset by peer).
22:25:53 -!- heroux has joined. 22:26:07 They're all terrible approximations of the real sound. 22:26:37 int-e: no surprise, because most animal calls don't follow the phonemics of any human language 22:26:47 shocking 22:26:50 so they're transcribed a bit randomly 22:26:57 *ribbit* 22:27:04 quak! 22:27:47 -!- heroux has quit (Read error: Connection reset by peer). 22:28:05 "ribbit" is pretty good, compared to that. 22:30:54 -!- heroux has joined. 22:31:53 -!- heroux has quit (Read error: Connection reset by peer). 22:35:45 -!- heroux has joined. 22:40:53 -!- heroux has quit (Read error: Connection reset by peer). 22:41:17 -!- heroux has joined. 22:43:35 it seems frogs make at least two types of sounds, one closer to ribbit and the other to qua(k)? 22:44:03 dunno, I live in a city, I rarely hear actual frogs 22:44:12 or maybe it's just different kinds of frogs, humble and noisy 22:44:54 I heard some at various times but won't say I had enough to decide 22:49:27 -!- heroux has quit (Read error: Connection reset by peer). 22:49:29 -!- arseniiv has quit (Ping timeout: 246 seconds). 22:49:47 -!- heroux has joined. 22:50:39 -!- heroux has quit (Read error: Connection reset by peer). 22:55:20 -!- heroux has joined.
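Back to the coshm1 question from earlier in the log: a sketch in C of the three variants discussed. The function names are made up; the last one is int-e's reformulation via cosh(x) = sqrt(1 + sinh(x)^2), which avoids the cancellation because sinh is well-conditioned near 0.

    #include <math.h>   /* link with -lm */

    static double coshm1_naive(double x) { return cosh(x) - 1.0; }   /* loses nearly all digits near 0 */

    /* Still cancels: expm1(x) and expm1(-x) are roughly +x and -x for small x. */
    static double coshm1_expm1(double x) { return 0.5 * (expm1(x) + expm1(-x)); }

    /* int-e's version: cosh(x) - 1 = expm1(0.5 * log1p(sinh(x)^2)). */
    static double coshm1_sinh(double x)
    {
        double s = sinh(x);
        return expm1(0.5 * log1p(s * s));
    }

An equivalent route is the identity cosh(x) - 1 = 2*sinh(x/2)*sinh(x/2), which sidesteps the cancellation the same way.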