←2019-11-01 2019-11-02 2019-11-03→ ↑2019 ↑all
00:23:07 <shachaf> Are pointers signed or unsigned?
00:23:42 <olsner> they could be
00:24:42 <shachaf> How should I think of them?
00:24:50 <olsner> what does it actually mean though? when do you have sign/zero-extension of pointers where you could tell the difference?
00:26:00 <olsner> I do like to think that x86-64 has signed pointers the way they're usually used (with kernel space in negative addresses)
00:29:43 <shachaf> Right, that's the sort of thing I was thinking.
00:29:49 <shachaf> Maybe it makes no difference.
00:30:27 -!- imode has quit (Ping timeout: 265 seconds).
00:32:20 <oerjan> . o O ( the difference is negative )
00:33:10 <int-e> shachaf: signs are pointers, so pointers should be signed, is that what you mean?
00:33:42 <shachaf> Exactly.
00:33:52 <shachaf> But then what are cosigns?
00:34:26 <int-e> They are orthogonal to signs.
00:34:56 <shachaf>   17:   76 06                jbe    1f <foo+0x1f>
00:34:59 <shachaf> So confusil.
00:35:26 <shachaf> I just learned about gas "1f" label syntax a few days ago, and I kept thinking it was a 0x1f offset.
00:35:34 <shachaf> This time it actually is a 0x1f offset!
00:35:43 <int-e> :)
00:36:17 <shachaf> (To be fair, this is objdump output, so it wouldn't use the 1f syntax.)
00:37:18 <int-e> To this day I find it confusing that the offset of relative jumps is relative to the address following the current instruction.
00:37:37 <int-e> 0x19 + 0x06 = 0x1f
00:37:55 <shachaf> Yes.
00:38:20 <shachaf> In particular I was trying to figure out a jump target in gdb a few days ago and I computed it relative to $rip without thinking.
00:38:42 <shachaf> No, not a jump target, rip-relative addressing.
00:39:01 <int-e> It makes sense, of course (the instruction has been decoded, and correspondingly, the IP advanced, when the jump happens)
00:39:27 <int-e> (thinking in terms of *very* old processors like 8086)
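The arithmetic int-e works out above (0x19 + 0x06 = 0x1f) can be checked mechanically. A tiny illustrative sketch, using the values from shachaf's objdump line, where the two-byte `jbe` sits at address 0x17:

```python
# rel8 branch targets are computed from the address *after* the
# instruction, as int-e notes.  Values taken from the objdump line above:
# "17: 76 06  jbe 1f" -- opcode 0x76, disp8 0x06, at address 0x17.
insn_addr = 0x17
insn_len = 2                      # 76 06: opcode byte + disp8 byte
disp8 = 0x06
target = insn_addr + insn_len + disp8
print(hex(target))                # 0x1f, matching "jbe 1f <foo+0x1f>"
```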
00:40:29 <shachaf> Sure.
00:41:54 <shachaf> Is rip-relative addressing the same way? I guess it must be but I've already forgotten.
00:42:33 <int-e> yes it is.
00:43:02 <shachaf> Yep, I just checked.
00:43:23 <shachaf> I should know this since I implemented most of the addressing modes recently.
00:43:54 <shachaf> Though not some of the weird ones like 64-bit (%eax).
00:43:57 <shachaf> Does anyone use that?
00:44:20 <int-e> Actually I think so.
00:44:29 <shachaf> `asm addr32; mov (%rax),%rdi
00:44:30 <HackEso> 0: 67 48 8b 38 mov (%eax),%rdi
00:44:45 -!- imode has joined.
00:44:58 <int-e> Having 32bit pointers is still attractive to conserve memory.
00:45:23 <shachaf> But you can use them with 64-bit registers, can't you?
00:45:43 <int-e> Yes.
00:45:44 <olsner> since you get zero-extension for most operations, you can usually just use %rax with a 32-bit address and save a byte
00:45:51 <shachaf> At least if you write something like mov foo, %eax; mov (%rax), bar
00:46:24 <shachaf> I wonder, is %eax sign-extended when you use (%eax)?
00:46:27 <shachaf> I imagine not.
00:46:45 <int-e> But I can make up reasons... getting proper overflow behavior for (%eax + 4*%esi + 0xbase)....
00:47:17 <olsner> but what would use something like that in 64-bit code?
00:47:55 <shachaf> When do you want overflow behavior for addresses?
00:48:07 <int-e> olsner: I *would* hope that this is a purely theoretical reason :)
00:48:17 <olsner> but an interesting side question is whether the address size affects all of the address calculation or just the size of the input registers
00:49:15 <int-e> Clearly I expect that it affects the whole computation.
00:49:28 <shachaf> `asm mov (%eax,%edx), %edi
00:49:29 <HackEso> 0: 67 8b 3c 10 mov (%eax,%edx,1),%edi
00:49:36 <shachaf> `asm mov (%rax,%edx), %edi
00:49:37 <HackEso> ​/tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: `(%rax,%edx)' is not a valid base/index expression \ /tmp/asm.s: Assembler messages: \ /tmp/asm.s:1: Error: missing ')' \ /tmp/asm.s:1: Error: junk `)' after expression
00:49:50 <shachaf> `asm addr32; mov (%rax,%rdx), %edi
00:49:51 <HackEso> 0: 67 8b 3c 10 mov (%eax,%edx,1),%edi
00:50:23 <shachaf> `asm addr32; gs; mov (%rax,%rdx), %edi
00:50:24 <HackEso> 0: 67 65 8b 3c 10 mov %gs:(%eax,%edx,1),%edi
00:50:25 <shachaf> `asm gs; addr32; mov (%rax,%rdx), %edi
00:50:26 <HackEso> 0: 65 67 8b 3c 10 mov %gs:(%eax,%edx,1),%edi
00:50:35 <shachaf> Just write the prefixes in any order you like. So convenient.
00:50:50 <shachaf> `asm data16; mov (%rax,%rdx), %edi
00:50:51 <HackEso> 0: 66 8b 3c 10 mov (%rax,%rdx,1),%di
00:51:05 <shachaf> Golly.
00:51:30 <shachaf> `asm movq (%rax), %xmm0
00:51:31 <HackEso> 0: f3 0f 7e 00 movq (%rax),%xmm0
00:51:36 <shachaf> `asm movq (%eax), %xmm0
00:51:37 <HackEso> 0: 67 f3 0f 7e 00 movq (%eax),%xmm0
00:53:11 <int-e> oh right, that was the gas syntax for these funny addressing modes
00:53:38 <int-e> offset(%base,%index,multiplier)
00:54:36 <olsner> possible dumb reason: someone planned/built a 32-bit x86 emulator (before compatibility mode was invented?) and convinced AMD to provide support for extra-stupid JIT compilers that just add prefixes to specific instructions
00:55:35 <int-e> olsner: sorry, I lost track... reason for what?
00:55:42 <olsner> for having the 32-bit override
00:55:57 <int-e> ah.
00:56:11 <int-e> plausible enough
00:57:11 <int-e> I also bet this was rather cheap to support.
00:58:16 <int-e> In context... which is a CPU that supports real mode (which has a 32-bit addressing mode via the address size prefix) and 32-bit mode for legacy software.
01:01:07 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=66918&oldid=66897 * DmilkaSTD * (+179) /* Introductions */
01:02:08 <esowiki> [[Esomachine]] N https://esolangs.org/w/index.php?oldid=66919 * DmilkaSTD * (+3608) Created page with "Esomachine was made by [https://esolangs.org/wiki/User:DmilkaSTD DmilkaSTD]. Imagine we have an array with infinite length. When it starts every array index is locked (If an..."
01:09:19 <oerjan> congratulations, schlock. you might get to save the galaxy single-handed...
01:17:38 -!- arseniiv has quit (Ping timeout: 246 seconds).
01:24:17 -!- oerjan has quit (Quit: Nite).
01:25:33 -!- imode has quit (Ping timeout: 265 seconds).
01:27:05 -!- imode has joined.
01:29:53 <shachaf> What other bizarro addressing modes are there in amd64?
01:30:21 <shachaf> Also did I link this tcc SSE bug I found? https://lists.nongnu.org/archive/html/tinycc-devel/2019-10/msg00033.html
01:30:29 <shachaf> It was somewhat annoying to track down.
02:08:38 <kmc> what bizarro mode are you talking about
02:08:58 <kmc> the base + mult*index + offset mode is pretty reasonable, aside from the gas syntax for it
02:09:44 <kmc> with Intel syntax it'd be like MOV EDI, DWORD PTR [4*EAX + EDX + 7]
02:09:52 <kmc> or what have you
02:09:53 <shachaf> Sure, but you have addr32, fs/gs, rip-relative, all sorts of things.
02:10:50 <shachaf> Presumably there are some things I don't know about.
02:11:09 <shachaf> Also there are all the little details, which I think I got right?
02:11:36 <shachaf> `asm lea (%r11), %rax
02:11:37 <HackEso> 0: 49 8d 03 lea (%r11),%rax
02:11:38 <shachaf> `asm lea (%r12), %rax
02:11:39 <HackEso> 0: 49 8d 04 24 lea (%r12),%rax
02:11:57 -!- imode has quit (Ping timeout: 265 seconds).
02:12:03 <shachaf> gotta include that sib byte for r12
02:12:46 <shachaf> Of course I haven't done SSE/AVX/whatever at all, or the VEX prefix, or anything like that.
02:14:25 <kmc> so much nonsense
02:15:02 <shachaf> what instruction encoding are you into
02:15:55 <fizzie> It doesn't even have a bit-reversed addressing mode.
02:16:30 <shachaf> What's that?
02:16:40 <fizzie> It's a thing DSPs have, for speeding up FFTs.
02:16:59 <fizzie> The TI TMS320C54x at least has it.
02:17:00 <shachaf> `asm lea (%r13), %rax
02:17:01 <HackEso> 0: 49 8d 45 00 lea 0x0(%r13),%rax
02:17:10 <shachaf> Right, r13 has a special case too.
02:17:21 <kmc> oh?
02:17:35 <shachaf> But I think that one is modrm+offset rather than modrm+SIB.
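The three HackEso encodings above can be reproduced with a toy encoder. This is a hand-written sketch covering only the `lea (%reg), %rax` form for r8–r15, not a real assembler, but it shows both special cases shachaf mentions: r12's low bits collide with the "SIB follows" ModRM encoding, and r13's with the RIP-relative one:

```python
# Minimal sketch of ModRM encoding for "lea (%base), %rax" where base is
# an extended 64-bit register (r8..r15).  Toy code for illustration only.
def lea_rax_from(base):            # base: 8..15 for r8..r15
    rex = 0x48 | 0x01              # REX.W (64-bit operand) + REX.B (extended base)
    low = base & 7
    if low == 4:                   # rm=100 means "SIB byte follows"
        return [rex, 0x8D, 0x04, 0x24]     # SIB 0x24: no index, base=r12
    if low == 5:                   # mod=00, rm=101 would mean RIP-relative
        return [rex, 0x8D, 0x45, 0x00]     # use mod=01 with a disp8 of 0 instead
    return [rex, 0x8D, low]        # plain ModRM: mod=00, reg=rax, rm=base

for r in (11, 12, 13):
    print("r%d:" % r, " ".join("%02x" % b for b in lea_rax_from(r)))
```

The output matches the HackEso lines: `49 8d 03` for r11, `49 8d 04 24` for r12, `49 8d 45 00` for r13.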
02:17:44 <shachaf> fizzie: That sounds pretty fancy. I should learn about fancy DSP things.
02:17:59 <fizzie> They also have circular addressing modes.
02:18:06 <fizzie> For FIR filters and suchlike.
02:18:47 <shachaf> Speaking of circles, what's the nicest way to write a circular buffer?
02:18:56 <shachaf> I don't like having a boolean to distinguish empty from full.
02:19:30 <fizzie> You can go with head + length instead of head + tail.
02:19:43 <fizzie> Then you have 0 and N for empty and full.
02:20:14 <shachaf> Hmm, I guess.
02:20:23 <fizzie> There's also that one fancy thing that I think Chrome used somewhere, or someone used somewhere.
02:20:25 <shachaf> What about the case where you have a separate reader and writer?
02:20:33 <shachaf> I know of some other tricks:
02:20:48 <shachaf> Map two copies of the same buffer in adjacent address space, so you get a contiguous buffer.
02:20:52 <fizzie> The Bip-Buffer, that's what I was thinking of.
02:21:01 <fizzie> The Bip-Buffer doesn't need the mapping trick.
02:21:29 <fizzie> (On the other hand, it may waste some space.)
02:22:30 <shachaf> Another trick I heard about is, instead of keeping the read/written size mod the buffer size, keep the total size, and mask it at use time.
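That last trick, monotonically increasing counters masked at use time, might look like the following in Python. An illustrative toy only: a power-of-two capacity makes the mask cheap, and `tail - head` gives the fill level, so empty (0) and full (N) are distinguished without the boolean shachaf dislikes:

```python
# Ring buffer keeping *total* read/write counts, masking only on access.
class Ring:
    def __init__(self, capacity):
        assert capacity & (capacity - 1) == 0, "capacity must be a power of two"
        self.buf = [None] * capacity
        self.head = 0              # total items ever read
        self.tail = 0              # total items ever written

    def push(self, x):
        assert self.tail - self.head < len(self.buf), "full"
        self.buf[self.tail & (len(self.buf) - 1)] = x
        self.tail += 1

    def pop(self):
        assert self.tail > self.head, "empty"
        x = self.buf[self.head & (len(self.buf) - 1)]
        self.head += 1
        return x

r = Ring(4)
for i in range(4):
    r.push(i)
print([r.pop() for _ in range(4)])   # [0, 1, 2, 3]
```

This shape is also convenient for a single reader and single writer, since each side only ever writes one of the two counters.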
02:22:40 <kmc> bip booper
02:24:47 <fizzie> I can't find any reference to anyone actually using the bip-buffer, just a few random implementations, so maybe I imagined that.
02:25:06 <shachaf> I'm reading about it now.
02:25:08 <fizzie> spsc-bip-buffer is "#108 in Concurrency" on lib.rs, which sounds like a TripAdvisor ranking.
02:26:44 <shachaf> This explanation doesn't seem very clear.
02:28:28 <shachaf> What's the benefit of this?
02:28:49 <kmc> what is lib.rs
02:28:59 <shachaf> Is it that writes are always contiguous (but reads might not be)?
02:32:14 -!- imode has joined.
02:42:13 <fizzie> AIUI, the reads are contiguous too.
02:43:39 <shachaf> Maybe I don't understand the diagram in https://www.codeproject.com/Articles/3479/The-Bip-Buffer-The-Circular-Buffer-with-a-Twist
02:44:06 <shachaf> What happens in 5? From their description it looks like both A and B contain data.
02:44:35 <fizzie> Right, reads of multiple writes are not necessarily contiguous.
02:45:01 <fizzie> Maybe.
02:45:13 <shachaf> Hmm.
02:45:31 <shachaf> I guess the idea is that a library might want to write a fixed-size thing and you want to make sure to be able to fit it in the buffer?
02:45:44 <shachaf> And another library can also interpret that fixed-size thing since it's contiguous.
02:46:33 <shachaf> (Or, y'know, non-fixed-size.)
02:46:37 <fizzie> Or, no, maybe reads of any size can be contiguous too, it's just that in stage 5 if you wanted to read more than the orange bit some copying would be involved.
02:46:51 <fizzie> ...or maybe not.
02:47:31 <shachaf> As in copying all the data in the buffer?
02:47:37 <fizzie> Yeah, I was looking at the API, for reading you just ask the thing for the largest contiguous block.
02:47:38 <shachaf> Regular circular buffers have this property too.
02:47:44 <shachaf> Which API?
02:48:05 <fizzie> Well, the BipBuffer class described there.
02:48:20 <fizzie> But I guess it's still useful, if you (say) put length-delimited protos there.
02:48:29 <shachaf> Oh, the one on that page.
02:48:51 <shachaf> I guess that's true?
02:48:56 <fizzie> As long as you write the thing into one contiguous (reserved) block, the reading side can also read it as one contiguous block.
02:49:09 <shachaf> Right.
02:49:24 <shachaf> Maybe it would be better for APIs to support reading and writing in multiple chunks.
02:49:45 <fizzie> Maybe.
02:49:45 <shachaf> I guess there's some concern that the API will want to keep an internal buffer and do some copying in order to support that.
02:50:28 <shachaf> I think the mmap solution is better if you want things to be contiguous.
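For reference, a very rough single-threaded sketch of the Bip-Buffer idea as described in the codeproject article: two regions A and B inside one buffer, writes always land in one contiguous block, and reads drain the older region first. All names here are invented for illustration; they are not the article's API:

```python
# Toy Bip-Buffer: region A = buf[a_start:a_end] holds the oldest data;
# after a wrap, region B = buf[0:b_end] collects new writes behind A.
class BipBuffer:
    def __init__(self, size):
        self.buf = bytearray(size)
        self.a_start = self.a_end = 0
        self.b_end = 0
        self.use_b = False

    def write(self, data):                  # reserve + commit in one step
        n = len(data)
        if self.use_b:                      # writes go into B, behind A
            if self.a_start - self.b_end < n:
                return False
            self.buf[self.b_end:self.b_end + n] = data
            self.b_end += n
            return True
        if len(self.buf) - self.a_end >= n: # room after A
            self.buf[self.a_end:self.a_end + n] = data
            self.a_end += n
            return True
        if self.a_start >= n:               # wrap: open region B at offset 0
            self.use_b = True
            self.buf[:n] = data
            self.b_end = n
            return True
        return False

    def read(self, n):                      # contiguous read, oldest data first
        n = min(n, self.a_end - self.a_start)
        data = bytes(self.buf[self.a_start:self.a_start + n])
        self.a_start += n
        if self.a_start == self.a_end:      # A drained; B (if any) becomes A
            self.a_start = 0
            self.a_end = self.b_end if self.use_b else 0
            self.b_end = 0
            self.use_b = False
        return data
```

In a stage-5-like situation, with data in both A and B, a read only returns what is left of A; B becomes visible once A drains. That is exactly the non-contiguity across multiple writes being discussed above.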
03:04:50 -!- imode has quit (Ping timeout: 240 seconds).
03:22:28 -!- imode has joined.
03:28:46 <int-e> @metar lowi
03:28:47 <lambdabot> LOWI 020320Z AUTO 27011KT 9000 FEW001 BKN002 08/07 Q1006
03:31:23 <shachaf> @metar koak
03:31:23 <lambdabot> KOAK 020253Z 00000KT 10SM CLR 13/01 A3011 RMK AO2 SLP194 T01330011 53004
04:17:47 <imode> @metar ksea
04:17:47 <lambdabot> KSEA 020353Z 01013KT 10SM SCT200 09/03 A3037 RMK AO2 SLP293 T00890033
04:27:11 -!- hppavilion[1] has joined.
04:32:45 <imode> using the thought I had earlier, you can build interesting data pipelines.
04:33:14 <imode> sum $1234 bitvector
04:33:45 <imode> or sum bitvector $1234 number
04:34:30 <imode> because you push a handle to the concurrent process to the queue, any further processes can be constructed, passed that handle, and form a linear dataflow graph.
04:35:45 <imode> a bidirectional one as well. `number` takes a number and a process to send that value to. `bitvector` takes a process, receives a number and sends the bits of that number to the taken process. `sum` takes a process, receives a number and keeps a running tally of that number which is available on request.
04:35:55 <imode> you can do lazy evaluation with that.
04:37:04 <imode> you can design a process that takes two handles, receives something and broadcasts it to the two processes it has handles to.
05:01:36 <imode> https://hatebin.com/shiyqdhisf not bad.
05:04:17 <imode> bitvector's logic is wrong, it should send zero on completion.
05:07:42 <imode> https://hatebin.com/lriwwfiijo that's better.
05:09:16 <imode> I feel like you can get pretty granular with this.
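A rough Python analogue of the pipeline imode sketches, with threads and queues standing in for his concurrent processes and handles. All names here are invented: `number` feeds one value to a handle, `bitvector` receives a number and streams its bits to the handle it was given (sending a terminator on completion, per the fix above), and `summer` keeps a running tally:

```python
# Toy handle-passing dataflow pipeline: number -> bitvector -> summer.
import queue
import threading

def number(n, out):                 # send one number downstream
    out.put(n)

def bitvector(inp, out):            # receive a number, emit its bits, then end
    n = inp.get()
    while n:
        out.put(n & 1)
        n >>= 1
    out.put(None)                   # terminator, sent on completion

def summer(inp, result):            # running tally, reported at the end
    total = 0
    while True:
        v = inp.get()
        if v is None:
            break
        total += v
    result.put(total)

q1, q2, res = queue.Queue(), queue.Queue(), queue.Queue()
threading.Thread(target=bitvector, args=(q1, q2)).start()
threading.Thread(target=summer, args=(q2, res)).start()
number(0b10110, q1)
print(res.get())                    # 3, the number of set bits in 0b10110
```

Because each stage only knows a handle to the next, further stages can be spliced in the same way, giving the linear (and extensible) dataflow graph described above.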
05:48:54 -!- ArthurStrong has quit (Quit: leaving).
05:59:23 -!- imode has quit (Ping timeout: 276 seconds).
06:00:46 -!- tromp_ has joined.
06:03:14 -!- tromp has quit (Ping timeout: 246 seconds).
06:09:26 -!- imode has joined.
06:14:13 <esowiki> [[Metatape]] https://esolangs.org/w/index.php?diff=66920&oldid=53872 * HactarCE * (+4400) Overhauled Metatape according to 2019 edition
07:12:11 -!- imode has quit (Ping timeout: 276 seconds).
08:26:05 -!- Phantom_Hoover has joined.
08:53:09 -!- kspalaiologos has joined.
09:01:17 -!- Phantom_Hoover has quit (Ping timeout: 240 seconds).
10:45:32 <esowiki> [[Esomachine]] https://esolangs.org/w/index.php?diff=66921&oldid=66919 * DmilkaSTD * (+0)
10:46:46 <esowiki> [[Esomachine]] https://esolangs.org/w/index.php?diff=66922&oldid=66921 * DmilkaSTD * (+15)
10:47:25 <esowiki> [[Esomachine]] https://esolangs.org/w/index.php?diff=66923&oldid=66922 * DmilkaSTD * (+10)
10:47:52 <esowiki> [[Esomachine]] https://esolangs.org/w/index.php?diff=66924&oldid=66923 * DmilkaSTD * (-7)
10:48:21 <esowiki> [[Esomachine]] https://esolangs.org/w/index.php?diff=66925&oldid=66924 * DmilkaSTD * (+10)
10:56:03 -!- tromp has joined.
10:57:33 <esowiki> [[Esomachine]] https://esolangs.org/w/index.php?diff=66926&oldid=66925 * DmilkaSTD * (+156)
10:58:59 -!- tromp_ has quit (Ping timeout: 246 seconds).
11:17:39 -!- hppavilion[1] has quit (Remote host closed the connection).
11:24:09 <kspalaiologos> has someone taken up on esoshell project?
11:36:37 -!- arseniiv has joined.
13:10:39 -!- kspalaiologos has quit (Quit: Leaving).
13:19:01 -!- b_jonas has joined.
13:24:21 <b_jonas> kspalaiologos: I beg to differ, but I can write usable parsers from scratch. just don't look at my ancient psz interpreter. that was long ago, and I've matured since.
13:26:13 <b_jonas> "<kspalaiologos> I shouldn't have slept at math lessons" => meh, it's quite possible that many of your lessons were a waste of time. get some good books and learn from them instead.
13:27:34 <b_jonas> "<shachaf> Are pointers signed or unsigned?" => I don't think that distinction makes sense there. you don't high-multiply pointers, or compare pointers from two different arrays
13:29:13 <b_jonas> but if I have to choose, they're probably signed on x86_64 (because the top bits are usually the same, unless you have a future cpu with a full 2**64-byte address space), unsigned on x86_16 (because they are mapped into x86_32's address space by zero filling),
13:32:01 <esowiki> [[Kill]] N https://esolangs.org/w/index.php?oldid=66927 * CMinusMinus * (+723) Created page with "'''Kill''' is a one-word, Python-interpreted, joke programming language created by [[User:CMinusMinus]]. The sole purpose of this language, is to delete the code. The only leg..."
13:35:27 <b_jonas> shikhin: for x86_32 though, signed vs unsigned does make a difference, and I don't know which one is used. either look it up in the ELF ABI docs, or allocate a 2.5 GB sized array (for which you need either an x86_64 kernel, or an x86_32 kernel configured to the slower 3GB+1GB address space split rather than the default 2GB+2GB split) and see how it's laid out and how pointers in it compare
13:35:33 <b_jonas> argh
13:35:39 <b_jonas> s/shikhin/shachaf/
13:35:49 <b_jonas> I suck at autocompletion
13:36:13 <esowiki> [[Kill]] https://esolangs.org/w/index.php?diff=66928&oldid=66927 * CMinusMinus * (+102)
13:36:32 <b_jonas> oerjan: ^
13:37:17 <b_jonas> "<int-e> To this day I find it confusing that the offset of relative jumps is relative to the address following the current instruction." => I find that one natural, and the other convention (which some cpu archs use) unnatural
13:37:42 <esowiki> [[Kill]] https://esolangs.org/w/index.php?diff=66929&oldid=66928 * CMinusMinus * (+22)
13:40:32 <esowiki> [[Language list]] https://esolangs.org/w/index.php?diff=66930&oldid=66864 * CMinusMinus * (+11) Added "Kill" Language
13:40:57 <esowiki> [[Kill]] https://esolangs.org/w/index.php?diff=66931&oldid=66929 * CMinusMinus * (+2)
13:42:57 <esowiki> [[Kill]] https://esolangs.org/w/index.php?diff=66932&oldid=66931 * CMinusMinus * (+3)
13:43:18 <b_jonas> "<shachaf> Maybe it would be better for APIs to support reading and writing in multiple chunks." => they already do, if you mean multiple chunks in memory assembled to a single chunk in the file descriptor or back, with preadv/pwritev, plus the aio api eg. aio_write is parametrized like that by default (I wanted to say "works like that by default" but it's probably not correct to use "works" for the
13:43:24 <b_jonas> linux aio api at all)
13:43:55 <b_jonas> hmm no, I remembered wrong, aio_write doesn't use preadv-style scatter-gather addressing
13:44:10 <esowiki> [[User:CMinusMinus]] https://esolangs.org/w/index.php?diff=66933&oldid=66903 * CMinusMinus * (+27)
13:44:10 <shachaf> I'm not talking about OS APIs, which support this already, but other APIs.
13:44:12 <b_jonas> what api was it then, other than preadv/pwritev, I wonder? I'm sure there was another
13:44:24 <shachaf> Presumably that's what fizzie is talking about too.
13:44:38 <shachaf> Just some arbitrary function in your code like parse_thing() that takes a buffer and a length.
13:44:43 <b_jonas> what other APIs then?
13:44:47 <b_jonas> ah
13:45:25 <b_jonas> shachaf: I think https://laurikari.net/tre/ allows you to match a regex against a string that is not contiguous, and even against a string that's read lazily
13:45:56 <shachaf> OK, but regex matching is one special-case API which is already naturally written as a state machine anyway.
13:46:17 <b_jonas> but of course contiguous buffers have a lot of advantages
13:46:20 <b_jonas> easier to optimize
13:46:42 <b_jonas> I worked with bitmap images at my previous job, and I wouldn't like a non-contiguous bitmap image
13:49:01 <b_jonas> if I was given one, I'd just copy it into a proper contiguous buffer (that is also aligned so that its rows are padded to a size that is 64 bytes long modulo 128 bytes; possibly padded a little at the beginning and end so I can read past the ends; and with the color channels either together and padded as if you had four channels if the input has three, or separately each one in a layer, depending on
13:49:07 <b_jonas> what I want to do with the image)
13:57:06 <shachaf> Of course APIs can do that, and keep their own buffers.
13:57:29 <shachaf> But then you have a bunch of different buffers all the over the system, which doesn't seem that nice.
14:10:23 <b_jonas> shachaf: no, in my experience, once you have a contiguous buffer, you can use it with multiple APIs in place for image processing
14:10:50 <b_jonas> there are subtleties about pixel formats, but in practice most of the time I didn't have to do unnecessary copies
14:11:49 <shachaf> OK, but maybe you have one buffer for parsing an HTTP request, and then another buffer for parsing the image it contains, or whatever.
14:12:10 <shachaf> Presumably you want to avoid a bunch of copies if you can.
14:12:25 <b_jonas> shachaf: the HTTP buffer has compressed images
14:12:40 <shachaf> Sure, another buffer for decompression.
14:12:46 <b_jonas> I have to decode those to raw uncompressed anyway if I want to work with
14:12:58 <shachaf> I'm describing the kind of thing you might want to avoid.
14:13:06 <shachaf> Can your decompression algorithm operate directly on the circular buffer?
14:13:06 <b_jonas> but in practice when I get an image from HTTP, I save it for multiple uses rather than process directly
14:14:10 <b_jonas> shachaf: hmm, I don't know the details, I usually decompressed images with either ImageMagick or ffmpeg, and read them from a regular file
14:16:01 <b_jonas> shachaf: for decoding video, I did store the uncompressed frames sparsely, so each frame can be anywhere in memory and they can be reused as a circular buffer
14:20:00 <b_jonas> shachaf: when the video is read from network directly, rather than local file, then ffmpeg does the reading, so I don't know what kind of buffer it uses
14:20:50 <b_jonas> admittedly I used ffmpeg as a separate process, so there are two copies of the uncompressed raw data there
14:20:58 <b_jonas> so I guess I was wrong above
14:21:09 <b_jonas> three copies if I want a planewise format
14:45:18 <shachaf> Running a separate process for video decoding is obviously not reasonable for any kind of special-purpose application.
14:56:54 -!- Phantom_Hoover has joined.
15:36:24 -!- kspalaiologos has joined.
16:19:45 <int-e> . o O ( Prove or disprove: There is a POSIX extended regular expression of length shorter than 10000 that accepts the multiples of 7 in decimal, with leading zeros allowed. )
16:20:31 <int-e> *Main> length rex ==> 10791
16:21:52 <int-e> Which doesn't include the anchors ^( and )$, so 10795 is where I'm really at.
16:23:50 <int-e> Make that 10793 (the parentheses are not required). Oh and I'm excluding the empty string but as far as I can tell this doesn't affect the length anyway; it's a matter of using + or * in one place.
16:32:46 -!- Phantom_Hoover has quit (Ping timeout: 265 seconds).
16:45:46 -!- Phantom_Hoover has joined.
16:51:22 -!- xkapastel has joined.
16:59:32 -!- lldd_ has joined.
17:26:20 -!- imode has joined.
17:32:33 -!- imode has quit (Quit: WeeChat 2.6).
17:33:32 -!- imode has joined.
17:41:43 -!- lldd_ has quit (Quit: Leaving).
19:04:36 -!- Phantom_Hoover has quit (Ping timeout: 240 seconds).
19:15:25 -!- imode has quit (Ping timeout: 268 seconds).
19:16:58 -!- Cale has quit (Ping timeout: 245 seconds).
19:29:06 -!- kspalaiologos has quit (Quit: Leaving).
19:29:12 -!- Cale has joined.
19:35:22 -!- imode has joined.
19:53:29 <b_jonas> int-e: eww.
19:54:03 <b_jonas> int-e: also isn't it ^[[:space:]][-+]( )$
19:54:09 <b_jonas> no wait
19:54:17 <b_jonas> ^[[:space:]][-+]?( )$
19:59:03 -!- Phantom_Hoover has joined.
19:59:49 -!- imode has quit (Ping timeout: 268 seconds).
20:00:54 -!- imode has joined.
20:08:05 <int-e> b_jonas: nah, no signs
20:14:36 <myname> int-e: if i want to be picky, i'd say .* does accept the multiples of 7
20:16:43 <int-e> myname: yeah but you know what I meant anyway
20:17:34 <int-e> Also obviously the right way to write such a regular expression is to not do it. :P
20:17:49 <int-e> (But the second best way is to write a program to do it for you.)
20:38:53 <b_jonas> int-e: yeah, there are programs that can automatically convert a nondet finite automaton to a regex, even with the blowup
20:39:16 <b_jonas> I know of one
20:39:30 <b_jonas> but there are probably more because it's a known algorithm
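The known algorithm b_jonas refers to is state elimination, the same "removing states one by one" approach int-e's program uses. A compact, unoptimized sketch for the mod-7 DFA follows; it uses Python re syntax rather than POSIX ERE (for `(?:...)` grouping), does no size optimization, and is not int-e's code, so its output is much longer than int-e's 10793 characters:

```python
# State elimination: convert the mod-7 DFA into a single regex by
# repeatedly removing states, rerouting paths p -> k -> q as
# R(p,k) R(k,k)* R(k,q).  The empty string "" stands for epsilon.
import re
from itertools import product

def wrap(r):
    return r if len(r) == 1 else "(?:" + r + ")"

def star(r):                      # Kleene star of an edge (None = no edge)
    return "" if not r else wrap(r) + "*"

def concat(a, b):
    return wrap(a) + wrap(b) if a and b else (a or b)

def mod7_regex():
    E = {}                        # edge regexes, keyed by (from, to)
    def add(p, q, r):
        E[(p, q)] = E[(p, q)] + "|" + r if (p, q) in E else r
    for s in range(7):            # delta(s, d) = (10*s + d) mod 7
        for d in "0123456789":
            add(s, (10 * s + int(d)) % 7, d)
    add("S", 0, "")               # epsilon: fresh start -> state 0
    add(0, "F", "")               # epsilon: accepting state 0 -> fresh final
    for k in range(7):            # eliminate the DFA states one by one
        loop = star(E.pop((k, k), None))
        ins = [(p, E.pop((p, k))) for p in ["S", *range(7)] if (p, k) in E]
        outs = [(q, E.pop((k, q))) for q in [*range(7), "F"] if (k, q) in E]
        for (p, a), (q, b) in product(ins, outs):
            add(p, q, concat(concat(a, loop), b))
    return E[("S", "F")]          # all that remains: start -> final

r7 = mod7_regex()
rx = re.compile(r7)
print(len(r7))                    # unoptimized, so far above 10793
```

Note this sketch also accepts the empty string (the epsilon path from start to the accepting state), which int-e explicitly excludes.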
20:40:08 -!- imode has quit (Ping timeout: 276 seconds).
20:40:26 <int-e> sure
20:41:08 -!- xkapastel has quit (Quit: Connection closed for inactivity).
20:41:31 <int-e> But do they also try to optimize the result size...
20:44:40 <b_jonas> int-e: obviously the regex would be shorter in perl regex syntax, where you can use the "recursion" feature, not to build recursive regex, but to reuse longer regex multiple times
20:44:59 <int-e> yeah that would definitely help
20:52:31 <olsner> hm, surprisingly large blowup from such a reasonably sized state machine
20:52:57 -!- dingwat has quit (Quit: Connection closed for inactivity).
20:55:19 <int-e> it's easily O(3^n) where n is the number of states
20:55:58 -!- oerjan has joined.
20:56:47 <int-e> So... let me try... 5 states (remainders 0..4 only): 689; 6 states: 2701; 7 states: 10793
20:57:31 <int-e> That really looks a bit worse than O(3^n). But of course the number of states is still small.
20:57:44 <int-e> But wait. O(4^n) actually makes more sense.
20:57:57 <int-e> And it looks pretty close to that.
20:58:04 <int-e> Hi oerjan.
20:59:11 <oerjan> hi int-e
20:59:56 -!- MDude has joined.
21:04:49 <int-e> But eh. My (fairly primitive) code is here: http://paste.debian.net/1113236/ ... it's optimizing, including a small peephole optimization (intelligently choosing between [07] and 0|7 depending on context), but fundamentally the question is whether there is a better way to convert a DFA (which happens to be a minimal NFA for the purpose) to a regexp than removing states one by one.
21:06:19 <int-e> And I just don't know the answer to that question.
21:06:38 <olsner> I tried a bit with https://github.com/qntm/greenery, it seems to always produce a regexp that converts back to the same DFA (which I suspect is not optimal for making a short regexp)
21:07:43 <int-e> Well this is inherently a DFA... you have 7 remainders to keep track of, so that's a minimum of 7 states, and if you use 7 states then you'll be dealing with a DFA.
21:08:08 <oerjan> <b_jonas> oerjan: ^ <-- i have no idea why you pinged me there
21:08:22 <oerjan> unless it was to joke about autocompletion
21:08:38 <oerjan> (in which case you need to work on your jokes)
21:09:17 <int-e> Maybe b_jonas wanted to highlight me. Which would've been appropriate. :)
21:09:23 <oerjan> heh
21:21:15 -!- imode has joined.
21:21:15 <arseniiv> what books on numeric recipes related to floating-point (or esp. IEEE 754) issues could you recommend? With recipes for inverse hyperbolic functions, or e.g. whether it makes sense to define `coshm1(x) := 0.5 * (expm1(x) + expm1(-x))` or one should just use plain `cosh(x) - 1`
21:27:23 <int-e> fun question....
21:28:02 -!- heroux has quit (Ping timeout: 240 seconds).
21:28:03 <b_jonas> oerjan: sorry, that should have highlighted olsner
21:28:39 <b_jonas> fizzie: the https://esolangs.org/logs/all.html website seems to be down
21:28:57 <int-e> I mean, cosh(x) - 1 suffers from terrible cancellation around 0, but 0.5 * (expm1(x) + expm1(-x)) still suffers from cancellation (expm1(x) = x + x^2/2 + O(x^3), expm1(-x) = -x + x^2/2 + O(x^3), coshm1(x) = x^2/2 + O(x^3)...)
21:30:35 <int-e> So expm1(log1p(sinh(x)**2)/2) may be better.
21:31:06 <b_jonas> arseniiv: the fourth edition of Knuth volume 2, only it's not yet written
21:31:07 <int-e> Modulo function names.
21:31:49 <arseniiv> b_jonas: :(
21:32:31 <arseniiv> int-e: ah, I suspected my definition would have a flaw
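int-e's point is easy to see numerically. A small illustrative sketch, comparing the three formulas against the Taylor value x²/2 (essentially exact for x this small); the tolerances below are just illustrative:

```python
# Cancellation in coshm1 near 0, for x = 1e-8 (true value ~5e-17).
import math

x = 1e-8
naive   = math.cosh(x) - 1.0                       # catastrophic cancellation
average = 0.5 * (math.expm1(x) + math.expm1(-x))   # leading x terms still cancel
rewrite = math.expm1(math.log1p(math.sinh(x)**2) / 2)
taylor  = x * x / 2                                # essentially exact here
print(naive, average, rewrite, taylor)
# naive comes out as 0.0 (all significance lost); average keeps only about
# half the digits; rewrite agrees with the Taylor value to around 1 ulp.
```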
21:32:51 <b_jonas> ah sorry, that will be third edition
21:32:54 <b_jonas> no wait
21:32:56 <b_jonas> fourth edition
21:33:06 <b_jonas> anyway, until that time, you can look at the existing third edition
21:33:31 <b_jonas> it doesn't talk about IEEE 754, but it does talk about floating point in general
21:34:03 -!- atslash has quit (Quit: This computer has gone to sleep).
21:34:07 <b_jonas> MIX uses a different floating point format that shifts by MIX bytes, rather than bits, but the main text considers other bases too, including base 2
21:35:44 <b_jonas> what the current edition doesn't consider is features specific to IEEE 754, which are infinities and NaNs
21:36:23 <fizzie> b_jonas: I'm not sure what's up with it, my monitoring has been saying every now and then that it's down for a bit.
21:36:27 <fizzie> Working for me now.
21:36:35 <arseniiv> b_jonas: mix bytes => wait, there are its own bytes? How many bits?
21:36:49 <b_jonas> arseniiv: either six bits, or two decimal digits
21:37:28 <b_jonas> arseniiv: technically the book says the byte has a range from 0 to a maximum that is between 63 and 99 inclusive, so a binary MIX goes up to 63, a decimal one up to 99, a ternary one up to 80 (81 values)
21:37:42 <b_jonas> arseniiv: see our wiki article
21:37:51 <b_jonas> (and the book itself)
21:38:30 <arseniiv> I wonder whether the MIX-related details make the text more obscure
21:39:17 <arseniiv> yeah, I was going to search to see whether I have it somewhere
21:39:22 <arseniiv> don’t remember
21:40:27 <b_jonas> have what?
21:40:30 <b_jonas> the books?
21:43:22 -!- OugiOshino has changed nick to BWBellairs.
21:44:08 <arseniiv> b_jonas: hm, I don't seem to find there many of the recipes I was looking for
21:45:11 <b_jonas> fizzie: yes, it's up now
21:45:13 <arseniiv> b_jonas: yeah, it seems I have that volume here, but the contents page doesn’t look too promising
21:45:21 -!- heroux has joined.
21:46:08 <arseniiv> I mean, for basics I have that “What every computer scientist should know about FP arithmetic” article reprint-as-an-appendix-from-some-Sun-manual
21:46:40 <int-e> texlive's documentation packages are ridiculously big
21:47:45 <arseniiv> but the careful examination of numeric issues by myself seems unnecessary if… hm I wonder if I should look at Numpy code
21:49:00 <b_jonas> arseniiv: TAOCP vol 2 almost certainly isn't enough for what you asked,
21:49:08 <b_jonas> but I'm not familiar with other books to recommend
21:49:25 <b_jonas> I haven't read many such books really
21:49:45 -!- heroux has quit (Read error: Connection reset by peer).
21:49:46 <int-e> I'm aware that there *are* numerical recipe books...
21:50:11 -!- heroux has joined.
21:50:41 <arseniiv> b_jonas: ah, OK
21:51:05 <arseniiv> int-e: yeah, they just seem elusive
21:51:21 <b_jonas> there's Stoyan Gisbert's numerical analysis textbook, which is freely available online, but I think only exists in Hungarian. I don't know if there's any translation
21:51:57 <int-e> @where floating-point
21:51:57 <lambdabot> "What Every Programmer Should Know About Floating-Point Arithmetic" at <http://floating-point-gui.de/> and "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David
21:51:57 <lambdabot> Goldberg in 1991 at <http://docs.sun.com/source/806-3568/ncg_goldberg.html> and <http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.102.244>
21:52:01 <olsner> b_jonas: finally a good excuse to learn hungarian?
21:52:01 <b_jonas> it has three volumes, the first one is an introduction one that goes pretty far, and then the second and third are about solving partial differential equations numerically
21:52:21 <b_jonas> therea are certainly more good books, I'm just not familiar with them
21:52:57 <b_jonas> for the logs, IIRC Stoyan Gisbert's book is available somewhere from http://www.tankonyvtar.hu/hu/bongeszes , but that server is down right now
21:53:31 <b_jonas> it says that it's down until 2019-11-03 though, so unless you see the date autoincrement, it'll hopefully come up later
21:53:39 <arseniiv> <b_jonas> and then the second and third are about solving partial differential equations numerically => (aaaah!! you know, this is the night here, how would I go to sleep now)
21:54:19 <arseniiv> (I’m afraid of numeric PDEs after my naive Schrödinger model blew up)
21:54:44 <b_jonas> arseniiv: right, the whole thing is so tricky that it's no wonder you need two volumes on it
21:54:50 <int-e> pfft.
21:54:51 -!- heroux has quit (Read error: Connection reset by peer).
21:54:56 <int-e> @where ffi
21:54:56 <lambdabot> http://www.cse.unsw.edu.au/~chak/haskell/ffi/
21:55:00 <b_jonas> I think the first volume covers ODEs and numerical integration
21:55:10 <b_jonas> int-e: lol
21:55:22 -!- heroux has joined.
21:55:29 <int-e> dead link tjhough
21:56:32 <b_jonas> int-e: https://www.haskell.org/onlinereport/haskell2010/haskellch8.html#x15-1490008 is probably the current one
21:56:44 <b_jonas> it's integrated to the main standard from the separate tech report
21:56:57 <int-e> @where+ ffi http://www.haskell.org/onlinereport/haskell2010/haskellch8.html
21:56:57 <lambdabot> Nice!
21:57:29 <b_jonas> @hwere ffi
21:57:29 <lambdabot> http://www.haskell.org/onlinereport/haskell2010/haskellch8.html
21:57:47 <int-e> yeah that's what I copied
21:59:33 <b_jonas> that's not standalone though, you need most of chapters 24 to 37 inclusive
21:59:38 <arseniiv> @ʍere ffi -- just testing
21:59:38 <lambdabot> http://www.haskell.org/onlinereport/haskell2010/haskellch8.html
21:59:40 <arseniiv> :o
21:59:42 <b_jonas> which have the relevant Foreign modules
22:00:26 -!- unlimiter has joined.
22:00:32 <b_jonas> eg. https://www.haskell.org/onlinereport/haskell2010/haskellch28.html#x36-27400028 defines the Foreign.C.CLong type
22:03:18 <b_jonas> so perhaps https://www.haskell.org/onlinereport/haskell2010/haskell.html#QQ2-15-159 would be a better link
22:03:24 <b_jonas> int-e: ^
22:06:14 -!- arseniiv_ has joined.
22:06:30 <int-e> I don't like the anchor :P
22:06:50 -!- arseniiv has quit (Ping timeout: 240 seconds).
22:07:31 <b_jonas> int-e: same without anchor then?
22:07:37 -!- unlimiter has quit (Quit: WeeChat 2.6).
22:07:51 <int-e> well then it's no longer the FFI specifically
22:07:54 <int-e> @where report
22:07:54 <lambdabot> http://www.haskell.org/onlinereport/haskell2010/ (more: http://www.haskell.org/haskellwiki/Definition)
22:08:03 <b_jonas> int-e: sure, but it's where you look up the ffi
22:08:08 <b_jonas> which might not be obvious
22:08:27 <b_jonas> I think it even has additions to the original ffi report
22:08:27 <int-e> I'm happy with the link to chapter 8
22:08:31 <b_jonas> ok
22:15:48 <arseniiv_> how did something like [miau] end up being spelled “meow” in English? Prior to hearing the pronunciation I thought it should be something like [mju], and secretly thought how strange it would be to hear that from cats
22:16:04 -!- arseniiv_ has changed nick to arseniiv.
22:17:32 <b_jonas> arseniiv_: no, that's "mew" which is a synonym
22:25:33 -!- heroux has quit (Read error: Connection reset by peer).
22:25:53 -!- heroux has joined.
22:26:07 <int-e> They're all terrible approximations of the real sound.
22:26:37 <b_jonas> int-e: no surprise, because most animal calls don't follow the phonemics of any human language
22:26:47 <oerjan> shocking
22:26:50 <b_jonas> so they're transcribed a bit randomly
22:26:57 <b_jonas> *ribbit*
22:27:04 <int-e> quak!
22:27:47 -!- heroux has quit (Read error: Connection reset by peer).
22:28:05 <int-e> "ribbit" is pretty good, compared to that.
22:30:54 -!- heroux has joined.
22:31:53 -!- heroux has quit (Read error: Connection reset by peer).
22:35:45 -!- heroux has joined.
22:40:53 -!- heroux has quit (Read error: Connection reset by peer).
22:41:17 -!- heroux has joined.
22:43:35 <arseniiv> it seems frogs make at least two types of sounds, one closer to ribbit and the other to qua(k)?
22:44:03 <b_jonas> dunno, I live in a city, I rarely hear actual frogs
22:44:12 <arseniiv> or maybe it’s just different kinds of frogs, humble and noisy
22:44:54 <arseniiv> I heard some at various times but won’t say I had enough to decide
22:49:27 -!- heroux has quit (Read error: Connection reset by peer).
22:49:29 -!- arseniiv has quit (Ping timeout: 246 seconds).
22:49:47 -!- heroux has joined.
22:50:39 -!- heroux has quit (Read error: Connection reset by peer).
22:55:20 -!- heroux has joined.