00:04:59 there should be a fungeoid with subroutines defined as rectangles anywhere, so to call them you reach them from any of 4 sides and any row and column within their size; they take the arg and start their work internally from the local (0,0); not sure about the exit 00:05:15 oh maybe you should exit directly from the other side of where you came in 00:07:33 the difference from befunge is that there is a toroidal wrapping inside the subroutine so you can't exit its bounds unless using a special "return" instruction 00:08:06 so you are free to wrap around without worrying about where exactly you place the subroutine within the whole program 00:08:18 just connect them with "wires" of <>v^ 00:09:06 Heh, that's novel. I think there are some "Befunge with functions" variants, but they have "non-physical" function calls, instead of having to route the execution into the function (and handle returns as well). 00:09:50 At least with the "exit from the opposite side" approach you wouldn't need the usual "push a 'return address' and switch on exit" kind of thing to make a subroutine you can use from more than one place. 00:10:25 As long as you route each call in from a different direction, anyway. 00:13:23 you can probably call some small rect subroutine from a thousand places if you attach a "bus" to it 00:13:42 -!- Koen__ has quit (Quit: Leaving...). 00:14:10 another block with plenty of inputs and outputs but calling the "small subroutine" in one place 00:16:09 and then embedding subroutines into rects like that allows you to create an address-agnostic drag-n-drop IDE with autotracing 00:18:40 Yes, but then you need a return switch. 00:19:31 If you have a "bus", you lose the information about where it was called from, meaning you'll have to push something on the stack and then branch on that something on the way back.
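The toroidal wrapping described above is just modular arithmetic on the instruction pointer. A minimal sketch (the `step` helper and coordinate convention are illustrative, not from any real fungeoid): inside a w×h rectangle the IP wraps around both axes, so a subroutine's layout never depends on where the rectangle sits in the larger program.

```python
# Sketch of the "toroidal subroutine" idea: the instruction pointer of a
# hypothetical rectangular subroutine wraps modulo the rectangle's size,
# so code inside the rectangle is position-independent.  The names here
# (step, pos, delta, size) are illustrative only.

def step(pos, delta, size):
    """Advance the IP one cell, wrapping toroidally inside the rectangle."""
    x, y = pos
    dx, dy = delta
    w, h = size
    return ((x + dx) % w, (y + dy) % h)

# Walking right off the edge of a 3x2 rectangle re-enters on the left:
assert step((2, 0), (1, 0), (3, 2)) == (0, 0)
# Walking up from the top row re-enters at the bottom:
assert step((0, 0), (0, -1), (3, 2)) == (0, 1)
```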
00:19:48 maybe I used wrong word 00:20:49 or not 00:20:55 ah I understand 00:21:32 yeah one single output pin address 00:23:37 you can create/resize such buses automatically, while the subroutine knows nothing about it 00:26:37 There's a few "subroutine"-like sections in fungot, like the brainfuck-to-bytecode translation, which is used by both `^def foo bf ...` and `^bf ...`, and those are done using that mechanism. The vertical sequences on the rightmost edge of lines 229-231, 235-240 and 262-267 are the three different destinations a program preparation subroutine can return to (the ^bf, ^ul and ^def handlers). 00:26:48 fizzie: or was it netscape?) was beaten up in otaniemi by fellow students. 00:28:24 Huh, I wonder what the context for that second part is. 00:28:55 is that DEFine and UnLearn or something? 00:29:29 That's ^def for define, and ^bf + ^ul for straight-up immediate execution of brainfuck and Underload, respectively. 00:29:44 ah 00:30:49 Have to say, for large Befunge programs some sort of auto-routing tool would probably be useful. 00:31:33 ais523: Do you think a modern instruction set would actually have Checkout-style instructions? 00:32:07 It's not clear to me whether "the things a CPU actually does" make for the best encoding to give a program to a CPU. 00:32:12 shachaf: I think Checkout is too extreme, and probably a bad idea because letting the processor infer things saves bandwidth 00:32:18 Compared to out-of-order execution and register renaming and so on. 00:32:35 modern instruction sets do have a few Checkout-style instructions; some of them aren't widely used because actually decoding the instruction costs more than the hint saves 00:32:52 Register renaming is a good example, I guess. A CPU has hundreds of registers, but you want instructions to be compact, so you may not actually want instructions to be able to address all of them. 
00:33:16 there are a few instructions for manual cache control that do genuinely seem to help in practice, though, and that nonetheless aren't widely used 00:33:17 I mean, a good example of the general thing I'm getting at, not something Checkout-specific. 00:33:46 in particular, nontemporal read-write is easily worth the cost of decoding if you have an algorithm where it's useful 00:33:52 * non-temporal read/write 00:34:48 I have test programs which read and write memory in lots of different ways, and nontemporal memory access is the only thing that makes a real difference, due to where the bottlenecks are 00:36:04 (non-temporal memory operations say "I want to read/write this memory now, but don't subsequently plan to read/write the memory again until after it's fallen out of cache"; in theory they work at cache line granularity, but the API for them reads in smaller chunks and you write several instructions in a row to cover the whole cache line or the part you care about) 00:36:26 "some sort of auto-routing" all this will break if there are g and p 00:36:48 and this is a pretty common performance pattern to have, whereas if you just use the default caching rules, you have all this data that's crowding things out of the cache and is completely useless 00:37:23 also, even without the extra cache pressure, non-temporal is a little faster at writing to / reading from main memory (but slower for writes/reads that would go to/come from cache, for obvious reasons) 00:37:53 it's a hint rather than a promise, too, so there's no undefined behaviour if you actually do need the value earlier than you said you would, just the processor has to slow down a little to find it 00:38:48 Man, modern CPUs are so complicated. 00:39:26 One thing I learned that was sort of surprising is that the common case (?) for instructions that have a register input isn't to read from the register file.
00:39:44 right, the permanent register file hardly ever gets used in practice 00:39:49 At least for out-of-order processors, where the input for the instruction was likely just made available. 00:39:57 So instead most things go through the bypass network, or something. 00:39:59 Hmm, well. In fungot's case, almost all g/p are to the first few rows (because the addresses are shorter), plus the "negative space", so as long as that's kept clear, it'd be fine. 00:40:05 only when you haven't touched a register for so long that the reorder buffer has forgotten the value, but you surprisingly actually still care about the value 00:40:07 fizzie: or actually, don't. but if you don't use empty in production code 00:40:24 Also in Funge-98 g/p are local to the storage offset, so subroutines that need "local" fungespace storage *could* be position-independent. 00:40:34 (the permanent register file normally isn't actually any slower to read than the reorder buffer, though, although a few processors can't read it twice in the same cycle) 00:40:37 (You'd just need to reset and restore the storage offset properly.) 00:42:38 out-of-order execution bothers me a little because it's mostly just working around bad compilers; the only times it gives an advantage over putting the instructions in the correct order to start with are a) when you want to perform some instructions from inside a loop after it's ended / before it's started (so you'd need to peel a few iterations to reorder manually), or b) when instructions take varying times to execute due to caching effects 00:42:49 b) is a pretty big reason, though 00:43:19 b seems like most of the point, doesn't it? 00:43:32 yes 00:43:43 Good compilers can't do much about that. 00:43:48 although, I'm not sure it actually speeds things up that much in practice 00:44:13 The Mill people claim it doesn't, and you can get away with static scheduling if you're clever. 00:44:16 I'm not sure. 
00:44:31 if your data is in main memory the reorder buffer won't help because it'll fill up before you get the data you need 00:44:46 The sequence 0{2u02-u2} sets an arbitrary storage offset (, ), and leaves the previous storage offset on the stack. 00:44:58 if it's something like L1 versus L2 cache then it might help, though, the difference between those is only a few cycles 00:46:19 The Apple M1 has a 600-entry reorder buffer or something like that, people say. 00:46:29 but, only when the dependency chains were short enough that you're bound on L2 latency 00:46:30 But helping with L2/L3 is surely still a big deal. 00:47:20 Someone gave this example: "x = *p; y = *q; [process x]; [process y]; result = x + y;" 00:47:27 I'm finding it hard to think of an application where a) data is frequently in L2 or L3, but b) you can't just move the read of it earlier, and c) all this somehow forms a dependency chain so that you end up with a loop-carried dependency 00:47:43 shachaf: what are you doing with result? 00:48:01 in order to get a meaningful slowdown this needs to be in a loop and result needs to influence p or q somehow 00:48:28 and, *p and/or *q need to be in unpredictable cache hierarchy levels 00:48:41 it certainly seems possible, but this doesn't seem like a common case 00:49:02 some sort of linked list traversal, perhaps? but why would the list be in varying cache levels? 00:49:53 didn't work with local offsets 00:50:00 I just made RASEL to not bother ..D 00:50:30 I don't think I've ever actually used the storage offset, I think it's mostly a bother in at least small-to-medium sized programs like fungot. 00:50:38 fizzie: his syntax is a non-standard extension to receive, though, the empty 00:50:41 I guess I have even implemented them in my half-done funge-98 interpreter but didn't test it 00:51:31 wait, non-temporal read? I haven't considered that for cpu caching, probably because neither x86 nor MMIX seems to have it. is it actually worth it?
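The storage-offset discussion above (Funge-98's g/p being local to the storage offset, which is what makes subroutines position-independent) can be sketched in a few lines. This is a toy model, assuming a dict-backed fungespace; the function names g/p mirror Funge-98's get/put but everything else is illustrative.

```python
# Minimal sketch of Funge-98-style storage-offset-relative g/p.
# `space` is a dict from (x, y) to cell values; `offset` plays the role of
# the storage offset, so the same "local" coordinates reach different
# absolute cells depending on where the subroutine was placed.
# All names are illustrative, not from any real interpreter.

def p(space, offset, x, y, value):
    ox, oy = offset
    space[(x + ox, y + oy)] = value          # put, translated by the offset

def g(space, offset, x, y):
    ox, oy = offset
    return space.get((x + ox, y + oy), ord(' '))  # get, empty cells read as space

space = {}
p(space, (100, 50), 0, 0, ord('A'))   # "local" cell (0,0) of some subroutine
assert g(space, (100, 50), 0, 0) == ord('A')
assert (100, 50) in space             # it really landed at the absolute address
```

The same subroutine body, run with a different offset, touches a completely different region of fungespace, which is the position-independence being discussed.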
for reading files non-temporally it exists (posix_fadvise and madvise), but that's different 00:52:03 whereas SSE2 (or SSE1, I don't know which) has a non-temporal write instruction or two 00:52:48 I feel like I more often want to refer to "absolute" addresses from all around the program (as the equivalent of "global variables"), or just very temporary scratch space (which can be anywhere) as opposed to needing "scoped" local storage. Though it might be useful for position-independent self-modifying code too. 00:53:16 I don't think non-temporal read is even reasonably possible, as in, just supporting it would require too much cost for programs that never use it 00:53:21 MOVNTDQA is the non-temporal read 00:53:25 SSE4.1, apparently 00:53:42 although AVX/AVX2 also implement it 00:53:52 Hmm, I was just imagining for (i = 0; i < n; i++) { x = a[i]; y = b[i]; ... } 00:54:34 shachaf: you can prefetch x and y in that example 00:54:52 "the permanent register file" => there is a permanent register file? I thought registers always lived on the one temporary register file, and that one is huge because it has to save enough register values to roll back to several different points before instructions that are not yet finalized 00:54:53 modern processors have prefetch hints for this, but don't actually need them; they'll notice you're accessing incrementing addresses and pull the next few elements into cache just in case 00:55:07 so it's faster to not give the prefetch hints because they're effectively no-ops and take some time to decode 00:55:11 why would there be a separate permanent register file?
00:55:27 I mean maybe for special registers that never go to the normal register file 00:55:39 b_jonas: in case you haven't used a register for so long that it isn't in the register renamer 00:55:55 but those wouldn't be in a file I think, unless they're so useless they're basically never used for anything on a fast path 00:56:01 I think the renamer works by changing instructions from "eax" to "internal register 54" or whatever 00:56:21 but, if you don't use eax at all for a while, there'll be no instructions to rename, so it needs a permanent file to say "this register is rax" 00:56:42 you could just have a permanent place to store "rax is register 54" but you may as well store the value of rax there, rather than a reference to it, and save one register 00:57:10 out of order execution => hmm, that's an interesting point. I never thought of it like that, because pentiums were already out when I started to try to understand what x86 even does, so I took out of order execution for granted 00:57:54 in rasel there is a "problem" about the stack addressing such that the address often has to be adjusted by 1, 2, 3 depending on where and in which "routine" you use it 00:58:03 because the stack size is always different 00:58:41 once you add something somewhere in the middle of the routine all the following uses of that stack address should know it's now 1 off 00:58:48 although I'm still quite sure I don't understand cpus enough to be able to figure out what would make their design better or worse, I'm still at the stage where I mostly try to understand how to use the existing (future high-end x86) cpus well, and a very little of understanding why some of the apparently very odd designs in the cpu may potentially be worth it even though it doesn't seem like they can ever 00:58:54 be 01:00:45 ais523: doesn't out of order execution also help because it allows for more compact code, as in more instructions that can only be coded in-place, or fewer registers
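The renaming scheme being described ("eax" becomes "internal register 54", with a permanent mapping for registers not recently written) can be modeled in a toy way. This is purely illustrative: a real renamer also recycles physical registers and rolls back the map on mispredicted branches.

```python
# Illustrative model of register renaming as discussed above: each write to
# an architectural register allocates a fresh physical register, and the
# map remembers which physical register currently "is" rax.  A sketch only;
# real hardware recycles physical registers and supports rollback.

class Renamer:
    def __init__(self):
        self.next_phys = 0
        self.map = {}            # architectural name -> physical register id

    def write(self, arch):
        """Rename the destination of an instruction writing `arch`."""
        self.map[arch] = self.next_phys
        self.next_phys += 1
        return self.map[arch]

    def read(self, arch):
        """Rename a source operand: the latest physical copy of `arch`."""
        return self.map[arch]

r = Renamer()
r.write("rax")                   # rax -> physical 0
r.write("rbx")                   # rbx -> physical 1
r.write("rax")                   # a second write gets a fresh register
assert r.read("rax") == 2        # readers see the newest copy
assert r.read("rbx") == 1        # rbx untouched, its mapping persists
```

The "rbx untouched" case is the one the conversation is about: even with no in-flight instruction naming rbx, the map (or a permanent register file holding the value itself) still has to answer for it.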
that you can name? 01:00:51 so it would be a "common operation" to 1. take the stack address at N and then 2. add N to it before using it in swapn 01:02:20 b_jonas: oh right, yes, I knew that once but I'd forgotten about it 01:02:31 one big benefit of out-of-order execution is saving on register names in the machine code 01:02:55 because it lets you place instructions in the order that uses the fewest register names, rather than needing to look for the most efficient order (which generally involves spreading dependency chains out through the code) 01:03:07 thanks for reminding me of this, that's a very big reason to have out of order execution 01:04:11 MOVNTDQA => you're right. I'm just stupid then. that is a genuine useful non-temporal read that I absolutely should have known about. 01:04:53 ais523: oh, and more importantly denser code because you can have instructions that reference memory directly; without instruction reordering you'd need a separate load (or prefetch) and arithmetic instruction 01:04:57 it's confusing because the difference between -A and regular instructions is normally alignment 01:05:12 but, MOVNTDQ and MOVNTDQA are both aligned-only, the difference is write versus read 01:05:15 that's probably even more important than the register count and stuff 01:05:21 I had to read the documentation about three times to figure out what the difference was 01:06:04 (and then I guess you'd also need more register names for those reads) 01:06:21 basically any time you load from outside L2 cache, you couldn't combine it with the instruction that uses that load 01:06:32 because the CPU couldn't reorder anything between 01:06:45 but a real pentium (or other reordering cpu) will reorder instructions between those 01:07:21 is there any online dictionary that would immediately give a meaning of MOVNTDQA? 
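The "common operation" described above (take a stack address for slot N, then adjust it by the current depth before using it in swapn) can be sketched as follows. The swapn here is a RASEL-like operation that swaps the top of stack with the element n below it; the exact RASEL semantics may differ, so treat this as an illustration of the depth-adjustment problem, not of RASEL itself.

```python
# Sketch of the depth-relative addressing problem described above.
# `swapn(stack, n)` swaps the top of stack with the element n below it
# (a hypothetical RASEL-like operation).  To reach a *fixed* slot counted
# from the bottom, the program must compute the distance from the current
# top, and that distance changes every time something is pushed.

def swapn(stack, n):
    stack[-1], stack[-1 - n] = stack[-1 - n], stack[-1]

def distance_to_slot(stack, slot):
    """How far below the top a bottom-indexed slot currently is."""
    return len(stack) - 1 - slot

s = [10, 20, 30, 40]
swapn(s, distance_to_slot(s, 0))   # swap top with bottom slot 0
assert s == [40, 20, 30, 10]

s.append(99)                        # push something in the middle of a routine...
swapn(s, distance_to_slot(s, 0))   # ...and the same slot needs a new distance
assert s == [99, 20, 30, 10, 40]
```

This is exactly why inserting an instruction mid-routine forces every later use of a stack address to be re-adjusted by one.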
because I thought it's some chat abbreviation and when pasting to google I hoped for an Urban Dictionary article but nope ) 01:07:44 it's an x86_64 instruction, most of the newer ones have really weird names 01:08:12 nakilon: the "Intel 64 and IA-32 Architectures Software Developer's Manual" which you can download from somewhere on intel.com, 01:08:13 to the extent that you can often figure out how new an instruction is by counting how many letters are in its name and how unpronounceable it is 01:08:18 nakilon: as well as the equivalent AMD manual 01:08:28 I normally use the AMD manual but I have both 01:08:29 I would make a universal definition lookup IRC bot command that would try to ask other channel bots and then if failed go to online dictionaries but this one is an example that I won't know where to look it up 01:08:42 I used to use the AMD manual, but now prefer the Intel 01:08:54 nakilon: we could add the instructions to our whatisdb I guess 01:09:33 how large is whatisdb? where is it? 01:09:35 the problem here is that it's a proper noun – the name of an instruction – which makes it hard to look up unless you know where it comes from 01:09:37 "figure out how new an instruction is" => lol 01:09:51 `whatis grep 01:09:53 grep(1) - print lines that match patterns \ grep(1p) - search a file for a pattern 01:10:03 `whatis wisdom 01:10:07 wisdom(1hackeso) - print random wisdom matching a string \ wisdom(5hackeso) - no description 01:10:18 ``` wc /hackenv/share/whatis # nakilon: 01:10:19 ​ 14662 97242 753236 /hackenv/share/whatis 01:11:56 holy shit, 14k definitions?
01:12:05 nakilon: a lot of that is stub entries 01:12:08 it's like some little country language 01:12:14 ah ok 01:12:22 the last time I downloaded the instruction list, the newest instructions were along the lines of VFNMSUB231PS 01:12:39 there are of course counterexamples like ADCX and ANDN and CMPXCHG8B 01:13:06 we're onto the AVX-512 era now, rather than the FMA wars era, but I still don't use AVX-512 because few people have a processor that handles it 01:13:10 ais523: I think those are just the highly advertised ones, the ones you need for matrix product benchmarks and occasionally for actual matrix products 01:13:39 b_jonas: well it does solve an actual problem that's hard to solve without processor help 01:14:06 oh, and CVTTPS2DQ which is from SSE2 but looks newer 01:14:08 although, it introduces yet more floating point inconsistency between processors, because you have an instruction for an accurate floating-point a * b + c 01:14:12 lol https://i.imgur.com/rc9wr9F.png 01:14:30 but, you need a pretty new FPU to use it 01:14:41 CMPXCHG8B is from pentium 01:14:46 so, do you use the instruction, in the knowledge that older processors won't be able to do the same thing and thus will produce a different result? 01:15:13 CMPXCHG8B is still only four words, though 01:15:41 ais523: yes, because it only comes up in code that will give different results depending on how the optimizer chooses to optimize it, and to some extent even to how the low-level numeric library chooses to optimize it 01:15:48 whereas, say, VPCLMULQDQ is an abbreviation of an eight-word phrase 01:15:48 well, s/only/mostly/ 01:16:27 (great instruction, though, I actually spent some time trying to find it recently because I needed it and it would have been a pain to implement without processor support) 01:17:46 wait... VPCLMULQDQ? is that a real instruction? how new is it?
01:18:23 ah, here it is, PCLMULQDQ 01:18:26 AVX (the original PCLMULQDQ was in a little feature set all of its own) 01:18:26 so it can't be that new 01:18:29 yeah 01:18:43 ok, this one I'm fine with not remembering the mnemonic 01:18:57 the https://stackexchange.com/search?q=what+is+MOVNTDQA is a little bit helpful but still no direct link to the definition 01:19:02 or anything specific about the instruction besides knowing there were a few carry-less multiply instructions 01:19:16 but yes, that can be useful 01:19:50 (and 0 results for "what is VPCLMULQDQ") 01:19:57 unlike the non-temporal loads, which I should have remembered (even if I don't know the mnemonic by heart) 01:20:35 b_jonas: I think there's only one carry-less multiply instruction (two if you count the V- version differently, but the only difference is top-half-unchanged versus zero-extension) 01:20:44 nakilon: carry-less multiply, like you need for polynomials over GF(2) 01:21:15 ais523: that is possible, though isn't there one for a specific GF(2^n) representation too? 01:21:23 looks like stackexchange failed to index its own entry: https://i.imgur.com/ulTX078.png 01:21:36 yep: CRC2 01:21:40 no 01:21:42 CRC32 01:22:32 oh, and apparently the AES instructions 01:23:00 but you're probably right that PCLMULQDQ is the most general 01:23:22 oh, there's also GF2P8AFFINEINVQB 01:23:36 did you ask for eight words? 01:23:56 ah, stackexchange search can't automatically ignore the "what is" 01:23:58 and GF2P8MULB 01:23:59 wow 01:24:08 so many instructions 01:25:41 I've never heard of GF2P8AFFINEINVQB 01:25:44 where does that one come from? 01:25:56 same intel manual 01:26:05 I mean, which instruction set? 01:26:10 let me check 01:26:13 I guess the universal searcher should have a huge list of dictionaries and automatically understand that "VPCLMULQDQ" should be searched in some "dictionary about CPUs" 01:26:47 "CPUID feature flag: GFNI"...
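The core operation PCLMULQDQ performs, carry-less multiplication, is easy to sketch: it is ordinary shift-and-add multiplication with XOR in place of addition, i.e. multiplying the operands as polynomials over GF(2). A minimal software version (a sketch only; the hardware instruction of course operates on 64-bit operands with a 128-bit result):

```python
# Carry-less multiplication (what PCLMULQDQ computes): multiply two
# numbers as polynomials over GF(2), using XOR instead of addition,
# so no carries propagate between bit positions.

def clmul(a, b):
    result = 0
    while b:
        if b & 1:
            result ^= a      # "add" with XOR: no carry propagation
        a <<= 1
        b >>= 1
    return result

# (x + 1) * (x + 1) = x^2 + 1 over GF(2): 0b11 * 0b11 = 0b101
assert clmul(0b11, 0b11) == 0b101
# Contrast with ordinary multiplication, where the carry makes 3 * 3 = 9:
assert 3 * 3 == 9
```

This is the primitive you want for CRCs and GF(2^n) arithmetic, which is why implementing it without processor support is such a pain.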
I don't know then 01:26:51 it must be very new, my Intel manual from 2016 doesn't list any instructions starting with G 01:27:04 which is a bit surprising, really 01:27:23 there's also a third one that starts with GF2P8 01:27:55 nakilon: Wikipedia's search box seems to do a pretty good job of recognising CPU instructions 01:28:00 ais523: it's not surprising, most new instructions like VGATHERDPD go under V 01:28:02 while it can be understood from the 3-gram "MUL" that it's about CPU the GF2P8AFFINEINVQB is really a mess 01:28:11 there are several instructions under VG 01:28:14 although it can't find GF2P8AFFINEINVQB 01:28:31 b_jonas: Intel have moved on from V, they're onto E now I think 01:28:50 or, maybe not 01:28:53 ais523 indeed 01:28:54 maybe they still start with V? 01:29:32 including I haven't heard of this starting with E thing, but I admit I'm not following all the new instructions with a short lag 01:29:47 ah, GF2P8 means "GF(2⁸)" 01:30:05 like, I'm not familiar with all the AVX512 instructions 01:30:10 b_jonas: I think starting with E is actually the encoding summaries rather than the instruction names, thinking about it 01:30:25 ais523: yes, a particular representation of GF(2**8) 01:32:15 appears to be the representation that chooses α such that α⁸+α⁴+α³+α+1=0 01:32:30 and stores field elements as the coefficient of a polynomial in α 01:32:33 is that the usual one? 
I have a list somewhere 01:32:37 no idea 01:32:45 I don't even know if it's the IOCCC one 01:33:02 the problem is I can't remember where 01:33:07 well, it's in an intel instruction so it can't be a terribly unusual representation 01:33:21 presumably they wouldn't add something that nobody would use 01:33:39 CRC32 and AES certainly use existing ones 01:34:07 found it 01:34:10 I mean those instructions implement crypto primitives that were used already before they got x86 instructions 01:34:24 [2,8,[1,0,1,1,1,0,0,0,1]], 01:34:42 unfortunately I also can't remember what format this file is in, but that doesn't look like α⁸+α⁴+α³+α+1=0 01:36:09 if there is such a list, it might be somewhere in https://www.jjj.de/fxt/ 01:36:15 looks like that's ordered with the α⁰ coefficient at the start, and α⁸ at the end, so it's α⁸+α⁴+α³+α²+1=0, not quite the same 01:36:31 http://www.math.rwth-aachen.de/~Frank.Luebeck/data/ConwayPol/index.html 01:36:44 (luckily I'd recorded the place I got the file from next to the file itself) 01:39:22 wow the definition of "smallest polynomial" here is weird, it looks for the highest coefficient that differs, then decides which is larger or smaller based on the value of that coefficient xor whether the difference between its exponent and the polynomial's degree is odd or even 01:39:40 I assume there's a reason for that, because I can't imagine this would be a standard definition for no reason 01:40:59 https://www.jjj.de/mathdata/all-primpoly.txt lists all polynomials used to compute GF(2**n) for n<=11 01:41:53 the one you mention α⁸+α⁴+α³+α²+1 is apparently the first in the order used in that list 01:42:23 and α⁸+α⁴+α³+α+1=0, mentioned on the website you linked, doesn't give a finite field 01:42:23 that page is linked from https://www.jjj.de/mathdata/ 01:42:34 so probably there's a typo on either the website, or in the intel documentation it draws its information from 01:42:43 huh...
01:43:00 ais523: no, it's 8,4,3,2,0 01:43:31 α⁸+α⁴+α³+α+1=0 is only in the intel manual 01:44:13 b_jonas: sorry, by "the website you linked" I meant the older link 01:44:30 what older link? 01:44:31 to the Intel instruction, not to the finite field polynomials 01:44:41 you didn't 01:44:42 I linked to a website with the intel instruction? 01:44:44 ah ok 01:44:49 I found it by a web search and assumed you'd linked it to me 01:45:13 https://www.felixcloutier.com/x86/gf2p8mulb 01:45:19 oh... 01:45:29 I assumed you'd just download a later intel or amd manual for it 01:45:32 it could possibly be a PDF extraction error? 01:45:43 I don't want to have to keep downloading manuals for this 01:45:46 I have two of them already 01:45:55 no, it's the intel manual that says x**8+x**4+x**3+x+1 01:45:55 admittedly the bandwidth doesn't cost much nowadays 01:46:02 um... 01:46:08 "I have two of them already" ... but 01:46:13 if they're obsolete 01:47:26 I'm assuming the old instructions don't change very much 01:47:43 so an old manual will still be good for anything other than newly invented instructions 01:48:02 https://xkcd.com/345/ "Hush, I'm coding. You ate yesterday."
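[Editor's note on the polynomial dispute above: x⁸+x⁴+x³+x+1 (0x11B) is the Rijndael/AES polynomial and is in fact irreducible, so it does define GF(2⁸); the intel manual isn't mistyped here. It's absent from the linked all-primpoly.txt list because that list holds primitive polynomials, and 0x11B is irreducible but not primitive (x is not a generator of the multiplicative group under it, unlike under x⁸+x⁴+x³+x²+1, 0x11D). A quick exhaustive check that 0x11B really gives a field:]

```python
# Multiplication in GF(2^8) reduced by the AES/intel polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B).  If this polynomial were reducible,
# some nonzero elements would be zero divisors with no inverse; the
# exhaustive check below shows every nonzero byte is invertible, so
# 0x11B does define a field, despite not appearing in lists of
# *primitive* polynomials.

def gf_mul(a, b, poly=0x11B):
    result = 0
    for _ in range(8):
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:            # reduce whenever the degree reaches 8
            a ^= poly
    return result

# Known inverse pair from the AES specification: {53} * {CA} = {01}
assert gf_mul(0x53, 0xCA) == 0x01

# Every nonzero element has a multiplicative inverse:
assert all(any(gf_mul(a, b) == 1 for b in range(1, 256))
           for a in range(1, 256))
```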
01:48:14 I don't need to buy food, I already bought food twice 01:48:25 ais523: yes, the old instructions rarely change, 01:48:46 but if the newly invented one for the GF(2**8) interests you 01:49:12 I guess 01:49:13 anyway, the intel manual says the polynomial is "x**8 + x**4 + x**3 + x + 1" (the powers set with superscripts) 01:49:29 it's wrong, surely, that wasn't on the list of valid polynomials you linked 01:50:19 yeah, that's odd 01:50:59 speaking about figuring out which dictionaries to look up, I guess [tags] here do the job https://i.imgur.com/FfAxjZC.png 01:51:14 I don't have a recent AMD manual 01:51:44 but hm, only 12 results and tags don't intersect much so it would need to prebuild some tag topic clouds 01:51:47 ooh, Intel now has a combined PDF that does all the instructions in one PDF, that's a good enough reason to download a new one 01:52:26 three results for G... https://stackexchange.com/search?q=GF2P8AFFINEINVQB 01:52:50 ais523: yes, it has all in one PDF. it still has the separate volumes for printing, but who prints a full Intel manual? 01:53:03 that's weird that not all search results display tags 01:53:13 b_jonas: it was split last time I looked 01:53:16 I guess it's been five years 01:53:46 ais523: it exists as split pdfs too.
and you still need to download the optimization manual in two separate pdfs (one general and one specific for your microarchitecture) besides the combined volume, and sometimes there's a supplement for planned future instructions 01:53:55 oddly, it still has separate chapters for A-L, M-U, V-Z 01:54:11 ais523: yes, it's literally "combined volumes" as in all "printed" volumes in one PDF 01:54:20 with the covers changed, but not much else 01:54:29 for optimization, I normally look at Agner Fog's manuals rather than the official ones, they're generally more accurate and also make it easier to work out what would run well on multiple architectures 01:54:43 ais523: sure, but I want to have both 01:55:06 I mean ideally I should have up to date AMD manuals too, but I don't 01:55:18 (I have rather old ones) 01:56:20 I tend to treat AMD's as the "official" ones as they invented x86_64 01:56:40 although, in practice, whatever Intel does tends to become standard because they have such market dominance 01:57:32 so two of 3 search results have the tag "assembly" and the top google result is the page with a nice list of them https://www.felixcloutier.com/x86/ 01:58:03 ais523: I think they're both official only for the CPUs that they each sell, but in practice those are very close and have almost no incompatibilities so you can use the other manual 01:58:30 there are two things you can do with a pointer: a) pointer arithmetic operations like offset, compare, etc.; and b) dereferencing it to get at the value you're pointing to – Of course, the latter is very much not specific to pointers. 01:59:25 Melvar: definitely; the problem is that many systems programming languages don't have anything efficient that does just b), if you want b) from a primitive then you get a) too 01:59:33 which rather hinders optimisation opportunities 02:00:58 Also the support for “peek but no poke” is limited apparently? 
02:02:25 truly read-only pointers are rare, although most languages have something comparable but with weird edge cases 02:02:35 at the language level, that is 02:02:46 it's quite easy to get modern CPUs to not let a program write to particular areas of memory at all 02:05:02 wow rubygems have webhooks https://guides.rubygems.org/rubygems-org-api/#webhook-methods so I can make a \rasel remote executor to redeploy the function if I update the gem; that's not something to do often (or even ever) but it also makes me think of using github webhooks in a similar way to automatically update the IRC bot handlers 02:05:59 of course it can also pull the master HEAD every time but it would make the command work one second longer and the handler will break if something happens to the repo 02:06:04 . o O (Haskell has three different representations that correspond to pointers) 02:06:40 though "something happens to the repo" isn't much more likely than "something happens to the GCP Functions", since the github repo isn't billed 02:07:16 (Well, GHC.) 02:21:37 -!- oerjan has joined. 02:37:59 [[Talk:OISC]] M https://esolangs.org/w/index.php?diff=87921&oldid=86793 * VitalMixofNutrients * (+2804) I want to dispute the claim that FlipJump is the simplest OISC, by proving that Bit-Bit-Jump is actually the simplest and can evaluate conditional statements unlike FlipJump. 02:46:45 a few days ago I dreamed that I was joined on an IRC channel, and since waking, I'm wondering if that was trying to reference a specific real channel that I once joined, presumably on freenode, or if it was completely invented 02:47:56 what was the Nitter analogue for Instagram? 02:50:34 "simplest OISC" seems like an interesting argument to get into 02:52:01 although, the argument on that page doesn't seem to help much 02:54:05 [[ID machine]] N https://esolangs.org/w/index.php?oldid=87922 * B jonas * (+25) redirect because that's where I looked and the search results didn't help 02:54:06 Yeah.
Turing-completeness is like an overly-full grilled sandwich; it doesn't matter whether it's panini or cubano, it is going to leak. 02:54:46 OISC systems have to have some essential complexity somewhere. If it's squeezed out of the instruction count, then it'll show up again in the instruction definition. 02:57:30 the discussion didn't mention TCness, so I guess an OISC with a nop instruction is the simplest 02:57:35 does 1.1 count as a OISC? 02:58:06 pretty much anything can be interpreted as an OISC if you try hard enough 02:58:22 I'm not sure it's something that can be objectively defined 02:58:32 heck, does slashes or Thue count as an OISC? 02:59:03 The Waterfall Model is arguably a ZISC (I actually found the ZISC formulation first, and it wasn't until I discovered the language a second time that I realised how easy it was to implement) 02:59:07 I think that OISC and ZISC are perspectives. 02:59:19 yes, that's a good way to put it 02:59:39 now I want sandwich ( 02:59:40 although, I do have the (possibly incorrect) view that an OISC/ZISC has to be imperative 03:00:24 the nondeterministic-as-in-declarative version of Thue, therefore, probably isn't (I still think this is the intended definition, as opposed to "replace a random substring") 03:00:30 It's fine for a ZISC machine to have computable (say, poly-time) small-step behavior, but be Turing-complete under iteration. That's how I think of The Waterfall Model, at least. 03:00:47 yes 03:00:47 there are languages where I don't even know how to count how many instructions they have 03:00:59 actually it's very common for TC languages to have simple small-step behaviour 03:01:45 are there languages without instructions? 03:01:51 oh, any 2-instruction no-argument language (e.g. Brainpocalypse or the I/D machine) can be made into an OISC by run-length encoding it 03:02:02 nakilon: that depends on your point of view 03:02:09 ais523: would you count a one-combinator basis of combinatorial calculus an OISC? 
and is it imperative? 03:02:45 b_jonas: only if it somehow worked without a precedence override, which is probably impossible (if you want to stay TC) 03:02:54 ais523: isn't that only if at most one of the two instructions have operand fields? 03:03:13 b_jonas: I guess, if you want to let the combined instruction take multiple arguments 03:03:31 ais523 people build everything with blocks, write music with notes, even that esolang where you put things on the table has things as instructions 03:03:47 I wonder if there is anything that can't be broken into discrete parts 03:04:05 nakilon: I have been looking for a language like that but failed to find one 03:04:18 I'm not sure what counts as instructions in Consumer society, even though it is imperative and programs have a source code 03:04:26 But Is It Art? is a good counterexample to a lot of statements about languages, it arguably doesn't have instructions 03:04:55 or the Post correspondence problem, that's like a 1D version of BIIA? 03:05:06 [wiki But Is It Art?] [wiki Consumer society] 03:05:07 but, they both still have composability in a sense 03:05:12 thread error 03:05:17 https://esolangs.org/wiki/But_Is_It_Art? 03:05:25 I'm not sure if Consumer Society has one 03:05:26 does Conway's Game of Life have instructions? 03:05:31 https://esolangs.org/wiki/But_Is_It_ArtF 03:05:34 https://esolangs.org/wiki/But_Is_It_Art%F 03:05:49 gah, what is up with the escaping here 03:05:51 https://esolangs.org/wiki/But_Is_It_Art%%F 03:05:54 Consumer Society doesn't have a wiki page because I haven't published its definition yet and I didn't want to create a completely useless stub 03:05:56 https://esolangs.org/wiki/But_Is_It_Art%3F 03:06:00 [wiki But Is It Art?] 03:06:01 https://esolangs.org/wiki/But%20Is%20It%20Art%3F 03:06:08 there we go 03:06:21 idk why it timed out on the first try 03:06:21 I needed to type two percent signs and two 3s for some reason 03:06:30 either google or wiki were cold I guess 03:06:30 `addquote Yeah. 
Turing-completeness is like an overly-full grilled sandwich; it doesn't matter whether it's panini or cubano, it is going to leak. 03:06:34 1334) Yeah. Turing-completeness is like an overly-full grilled sandwich; it doesn't matter whether it's panini or cubano, it is going to leak. 03:07:49 does C have instructions? 03:08:11 it has statements, those are a decent analogue for instructions 03:08:17 or Algol might be a better question 03:08:40 ais523: ok, if statements matter then how about just lambda calculus? 03:08:40 something like Diophantine equations are a good example of something that doesn't clearly have separate statements 03:09:10 b_jonas: I think the best way to think about lambda calculus imperatively is that apply is the statement 03:09:16 or the instruction 03:09:28 might be 03:09:36 heh https://esolangs.org/wiki/Matrioshka_language -- the "matrIOshka" is a word with Ё 03:09:46 this is very clear in unlambda, the only thing that actually does anything is the backquote 03:10:21 nakilon: I think "matrioshka" is an English word which was borrowed from Russian, but often those words change in the borrowing 03:10:56 I had matrioshka 03:11:06 when I was like 5 03:11:33 e.g. "babushka" is an English word by now but the vowels are all different compared to the Russian original 03:11:44 I have a small matrioshka 03:11:56 I certainly didn't have one when I was 5 03:12:13 (the English version of the pronunciation is inherently funny to say which is why it caught on, but would be annoying to use on a regular basis) 03:13:42 Wiktionary says it's pronounced bəˈbuːʃ.kə in English, the Russian version is ˈbabʊʂkə which is quite different 03:14:18 English is really weird sometimes 03:14:21 yeah, I heard that the accent is in different place 03:15:13 My favorite example of English inability to pronounce things right is French "marche" vs. English "mush". 
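The run-length-encoding observation a few lines up (any 2-instruction no-argument language, e.g. Brainpocalypse or the I/D machine, can be turned into an OISC) can be sketched concretely. This is only an illustration of the encoding trick, not anyone's actual implementation; the function names and the `I`/`D` opcode letters are my own choices:

```python
# Sketch: a hypothetical 2-instruction no-argument language with ops "I" and "D".
# Run-length encoding turns "IIDDDI" into [2, 3, 1]; a single parameterised
# instruction RUN(n) = "perform n repetitions of the current op, then switch
# to the other op" recovers the program, so only one opcode remains.

def to_oisc(program):
    """Run-length encode a string over {'I', 'D'}, counting runs from 'I'.
    A zero-length first run handles programs that start with 'D'."""
    counts = []
    current, run = 'I', 0
    for op in program:
        if op == current:
            run += 1
        else:
            counts.append(run)
            current, run = op, 1
    counts.append(run)
    return counts

def from_oisc(counts):
    """Decode back: emit runs of alternating ops, starting with 'I'."""
    out, current = [], 'I'
    for n in counts:
        out.append(current * n)
        current = 'D' if current == 'I' else 'I'
    return ''.join(out)

assert to_oisc("IIDDDI") == [2, 3, 1]
assert to_oisc("DDI") == [0, 2, 1]   # zero-length leading run for an initial 'D'
assert from_oisc(to_oisc("IIDDDI")) == "IIDDDI"
```

Note the caveat raised in the discussion: this only works cleanly when the underlying instructions take no operand fields of their own.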
03:15:25 By the way, the Russian system of indicating the stressed syllable with a mark *over* the vowel rather than next to it is so much better. 03:15:25 Words often change in borrowing between different languages 03:16:53 Sometimes even in borrowing within the same language. 03:17:55 Yes, even same language too sometimes 03:20:21 IIRC there are some cases where English borrowed the same word twice, with two different meanings, but can't think of any offhand 03:21:36 in modern Russian the lack of a culture of education (in Soviet time it was cool to know things, read a lot, etc.) and internet with all its memes and hypes makes young people often learn anglicisms instead of using the Russian word that always existed but teachers didn't bother to teach it to kids 03:22:05 ais523: I'm not sure if "proof" vs "probe" counts 03:22:14 for what you're looking for 03:22:51 huh, that's interesting, "prove" once used to mean "test" (i.e. an effective synonym of "probe") but the meaning changed over time 03:23:10 shachaf: that's also the Greek and Spanish system, but unlike the russians those actually use it 03:23:10 so maybe we borrowed it twice, with the same meaning, but the meaning diverged in between? 
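The wiki-link escaping saga a couple of screens up ("But_Is_It_Art%F", then "%%F", then finally the working "%3F") comes down to ordinary URL percent-encoding: '?' is byte 0x3F, so it must appear as "%3F", and a bare "%F" is an invalid escape. The standard library reproduces the URL the bot eventually got right:

```python
# Percent-encoding the page title the way the working link in the log does.
from urllib.parse import quote, unquote

encoded = quote("But Is It Art?", safe="")   # safe="" also encodes spaces
assert encoded == "But%20Is%20It%20Art%3F"
assert unquote("But%20Is%20It%20Art%3F") == "But Is It Art?"
```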
03:23:34 ais523: I think it's actually borrowed as "proof" and "probe", and "prove" was derived from "proof" in English, but I'm not sure 03:24:12 as for meaning change, that's why "proof of the pudding" makes no sense 03:24:36 looks like "prove" in French was the borrowed word 03:25:03 for "prove"/"proof" 03:25:08 for example, people installing and using software with a messaging functionality and lacking the Russian localisation learn the word "message" and don't use the word "сообщение" 03:25:13 whereas "probe" was the same word but borrowed from Latin 03:25:22 they say месседж or мэсседж 03:26:43 in 5 or 6 years time, probably the loanword will be a real word with the meaning of "a message sent over the Internet in particular", the way these things normally go 03:27:17 ais523: "governor" and "cybernetics" are two borrowings of https://en.wiktionary.org/wiki/%CE%BA%CF%85%CE%B2%CE%B5%CF%81%CE%BD%CE%AE%CF%84%CE%B7%CF%82#Ancient_Greek 03:27:18 ais523: isn't that "email"? 03:27:26 You may recognize a third recent borrowing, "kubernetes" 03:27:34 Email is not the only message send over internet 03:27:39 b_jonas: "email" is more specific 03:29:07 nakilon: i suspect internet slang is a mess of borrowings in all languages other than english, not just russian. 03:29:08 Corbin: that's amazing, that there's such a difference in meaning 03:29:25 (and english is a mess too) 03:29:26 oerjan: in English, too 03:29:44 e.g. "kek" is a borrowing from an invented language made for World of Warcraft… 03:29:53 what? are you sure? 03:30:25 b_jonas: is that directed at me? 
yes, this one's pretty well documented 03:30:26 I thought "kek" was an alternate spelling for an onomatopoeia that may or may not have been borrowed from Japanese 03:30:33 ais523: yes, about "kek" 03:30:52 it's "lol" passed through a character filter designed to prevent the players of the two opposing factions understanding each other 03:31:01 no, apparently from korean 03:31:11 this makes just as much sense as most Internet slang… 03:31:29 although, this is disputed, the Korean borrowing is also mentioned 03:32:07 I mean it's an obvious onomatopoeia, it can appear in multiple languages and be impossible to figure out where it's copied from 03:32:13 oerjan I won't mind it if internet messaging was something very new thing but those were "сообщения" for many years, until the internet got to the youngest people who lack the vocabulary 03:32:45 Wiktionary (which is not a reliable source for this sort of dispute) says that the World of Warcraft thing was *intentionally* added by Blizzard to perpetuate a Korean Starcraft meme, which i think is more or less impossible 03:32:55 heck, in general I don't understand how linguists can so often give such certain statements about etymology when there's more than one possibility. 03:33:41 Sometimes the statements aren't so certain 03:33:42 fwiw, this may be a weird case in which the supposed etymology is the reason the word is used, even if it isn't correct… 03:34:27 ahah, it says in French it's Французский Кек — Queque 03:34:32 "quiz" is a good one; there's a widespread belief that the word was invented for a bet, but apparently there's no evidence about this 03:35:06 b_jonas: Until relatively recently, words had to arise in geographic locations. We know e.g. where "marche" became "mush" because we know where French and English occupied territory during the start of dog-sledding in North America. (And TBH I think that "like, Alaska" is the best answer we currently have?) 
03:35:23 heh, Russian for France is almost identical to the English (and of course to the French), but the spelling is so different it's hard to recognise 03:38:04 looking up "the proof of the pudding", apparently it's mutated into "the proof is in the pudding" in some areas 03:38:12 which makes even less sense 03:38:18 russian article http://wikireality.ru/wiki/%D0%9A%D0%B5%D0%BA says kekeke originated in starcraft as an automated transliteration from horean hehehe but I agree it's not clear that kek is the same as kekeke 03:38:56 *korean 03:38:58 _<> 03:39:32 so the debate is primarily about which Blizzard-created automatic character filter is responsible? 03:39:51 lol 03:41:31 hmm, "pwn" is another good one, especially as it likely had no defined pronunciation for a while 03:44:28 oerjan: Does Russian not use it? 03:44:36 yeah, that's one that appeared in written chat like "glod" and "teh urn" 03:44:48 but those have obvious pronunciations 03:44:49 Or do you mean that Greek or Spanish use it all the time, and not just when indicating how to pronounce a word? I haven't seen that. 03:44:52 shachaf: only in dictionaries and language textbooks for learners afaiu 03:44:59 yes 03:45:05 use what? 03:45:20 nakilon: ´ above a vowel to indicate stress 03:45:25 ah 03:45:26 Is there any language where the spelling indicates where the stress goes all the time, and not just in special cases? 03:45:29 Finnish, I suppose. 03:45:36 Sort of. 03:46:07 actually in books the thing is printed right above the letter 03:46:14 huh, neither Wiktionary nor Urban Dictionary has "teh urn", but it was definitely widespread in the speedrunning community for a while 03:46:16 greek uses it in all multisyllable words, while spanish has a default stress rule that allows leaving it out in many words, but it's mandatory for all others 03:46:16 Most languages already use very redundant spelling compared to e.g. Hebrew. 
03:46:23 I think it might be more of a meme than an actual word, though 03:46:24 I guess it's just an internet thing that people have to put the ' somewhere 03:46:37 ais523: do they have "urn" and "teh" separately? 03:46:41 we don't have a functionality to put the thing directly above an arbitrary letter at least in russian layout 03:46:54 oh wait 03:47:05 b_jonas: well, "urn" is a real but unrelated word, and I'd expect "teh" to be there because it's older 03:47:12 I know unicode doesn't have precomposed characters for russian vowels with an acute accent, 03:47:13 if you mean the rules of transcription in nelgih dictionaries -- yet, it's older than internet 03:47:18 -!- earendel has joined. 03:47:27 *english 03:47:28 _OO 03:47:44 the September is cold, my fingers to typos 03:47:49 *do 03:47:57 while it does have precomposed characters for vowels with acute or grave (both acute and grave are used to mark stress, but in different European languages, by the way), because all of them clearly exist in at least some language like Welsh 03:48:48 huh, urban dictionary has "urn" in an entirely different sense 03:48:58 a sense I never heard of 03:49:07 knowyourmeme doesn't have "teh urn" either (although it appears in one of the references) 03:49:16 b_jonas: that doesn't surprise me, lots of slang is regional 03:49:16 fun 03:49:36 ais523: sure, not the part where I don't know the slang 03:49:46 I don't know most of the entries in urbandictionary 03:50:05 just that there is a third sense that happens to collide with that existing word 03:50:15 with apparently three different etymologies 03:50:35 "Язы́к программи́рования" -- here is copypasta from wikipedia; in the article the tick is right above the letter but while I'm typing this message I see it's after the letter 03:50:36 I think "teh urn" is probably best considered to be a Twitch meme, which was fairly long-running for Twitch memes but short-lived in terms of the language generally 03:50:50 heh, when I've pressed Enter 
it's now rendering above the letter again 03:50:56 nakilon: the russian and greek ´ are unicode modifier characters (not sure if they're the same character), so you _can_ put it anywhere in unicode as long as you have a way of typing it 03:51:30 text entry boxes often treat combining characters in dubious ways 03:51:35 -!- Sgeo_ has joined. 03:51:45 I'm not sure if there is a non-dubious way to treat them for editing purposes (as opposed to display purposes) 03:51:54 when I'm editing this copypasta in the text input I can't select it and move elsewhere, it's like bound to the vowel already 03:52:10 As far as I know, mostly it is the Romantic languages that have acute and grave accents, and other languages work differently. Is that right? 03:54:04 -!- Sgeo has quit (Ping timeout: 252 seconds). 03:54:48 zzo38: I think that mostly depends on where the language got its alphabet from 03:54:52 b_jonas: btw italian uses _both_ acute and grave to mark stress on the last syllable (and in some dictionaries, elsewhere) with a close/open distinction of pronunciation when the vowel is e (or o, except then it's always ò at the end), but only there unless it's in a dictionary. 03:55:37 in theory, a language could have multiple different writing systems, but that doesn't seem to happen that often in practice 03:56:47 Man, consensus is just the best. How come hardly anyone's into it? 03:57:08 ais523: it does happen, just usually not simultaneously. serbian might be the only one that can keep two for decades. 03:57:17 ah no 03:57:24 obviously Norwegian will keep them longer 03:57:26 Well, other languages even with Latin alphabets work differently as far as I know, at least Germanic languages that use Latin alphabets, as far as I have seen they are differently, but I don't really know all of these thing if it is. I know that English writing does not normally use the accent marks, at least. 
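The combining-mark behaviour described above can be verified with the standard library: U+0301 COMBINING ACUTE ACCENT attaches to any base letter, but NFC normalization only composes pairs that have a precomposed code point, which Latin é has and Cyrillic vowels with acute do not:

```python
# Check that NFC composes Latin e + acute but leaves Cyrillic ы + acute
# as two code points, matching the "no precomposed characters for russian
# vowels with an acute accent" observation in the log.
import unicodedata

latin = unicodedata.normalize("NFC", "e\u0301")      # é exists precomposed
cyrillic = unicodedata.normalize("NFC", "ы\u0301")   # no precomposed form

assert len(latin) == 1       # composed into U+00E9
assert len(cyrillic) == 2    # stays base letter + combining mark
assert unicodedata.name("\u0301") == "COMBINING ACUTE ACCENT"
```

This also explains the editing oddities mentioned in the log: the mark is a separate character in the string, but renderers and text boxes treat it as glued to the preceding letter.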
03:57:59 there are many more that had different writing systems with different scripts at different times 03:58:00 technology seems to drive elimination of letters from English 03:58:09 þ started dying out when printing came about 03:58:30 and it was typewriters and then computers that have mostly driven out the diaeresis 03:58:52 oh, as for the diarrhea, I wanted to ask 03:59:04 what 03:59:07 how do you spell Bo-otes the constellation in English ais523 03:59:12 (I'm just about old enough to remember the time when diaereses were seen frequently enough to not look odd) 03:59:28 b_jonas: I don't know the constellation in question 03:59:39 ``` \? diarrhea # nakilon 03:59:40 Diarrhea is the most sickening accent, although some others are more grave. 03:59:50 I know that one; it's spelled "bootes" and you're just supposed to know that it's boötes. 04:00:21 Wikipedia suggests https://en.wikipedia.org/wiki/Bo%C3%B6tes 04:00:35 so it's using the diaeresis for its intended purpose 04:01:04 ais523: yes, so I was wondering if you use the diar... trema there 04:01:31 I use the diaeresis everywhere I can get away with it, which is very few places 04:01:40 nowadays, few people know it exists 04:01:54 a lot can change in a few decades 04:02:07 the context is https://logs.esolangs.org/freenode-esoteric/2021-05.html#lqp 04:02:37 ais523: yes, but isn't it one of those things that you can still use if most people don't know it because they can pick it up by example? 04:02:52 Yes, I have heard that a thorn is not used in English due to printing, but I think some languages still use 04:03:00 there's a difference between what's understandable and what's socially acceptable 04:03:24 people will laugh at you for writing "coöperative", even though it's normal-ish in old books 04:03:56 b_jonas: nynorsk vs. bokmål aren't just about spelling, there are also different word choices at least traditionally. 
and even phrasing: nynorsk frowns more upon using convoluted syntax with verbs being nouned (but _both_ frown upon it compared to german or even english) 04:04:00 Wiktionary calls it a "rare spelling" 04:04:50 lol HTML https://web.archive.org/web/20120204065251/http://people.ku.edu/~nkinners/LangList/Extras/famous.htm 04:04:58 oerjan: ok. and it's not clear if serbian is a good example, or if in the future we'll just see it as a short period when two systems coexisted 04:05:28 (I do hope it's the serbian latin that dies out by the way) 04:05:48 (but one of them will die out for sure) 04:06:45 b_jonas: apparently they've survived in parallel since 1830 04:06:54 what? really that old? 04:06:58 I thought it was much newer 04:06:58 which is longer than I expected 04:07:32 this may end up in a situation like Japanese, where hiragana and katakana are used for different purposes (katakana's almost like italics) 04:07:51 but have a 1-to-1 correspondence 04:09:03 I don't really see how it could end up that way 04:09:30 well maybe 04:10:00 I can more easily imagine them being used in different kinds of text, but not generally mixed in one book 04:10:29 but the problem is 04:11:01 both latin and cyrillic already has lower case, upper case, and italic forms, so you don't need an extra doubling to use different letters for different occasions 04:13:59 you can have three cases, but six is entirely too many 04:15:42 s/cases/genders/ * runs hastily away from both woke and bantu people 04:17:04 https://en.wikipedia.org/wiki/Plankalk%C3%BCl 04:17:11 so didn't serbian cyrillic only get popular in the 1980s or 1990s, even if it was invented earlier? 
04:17:14 > While working on his doctoral dissertation, Zuse developed the first known formal system of algorithm notation[7] capable of handling branches and loops.[8][9] In 1942 he began writing a chess program in Plankalkül 04:17:21 sorry sorry 04:17:23 I mean 04:17:26 so didn't serbian latin only get popular in the 1980s or 1990s, even if it was invented earlier? 04:17:47 serbian cyrillic was popular before that obviously 04:18:08 dude didn't even yet have a programming language but started coding chess -- how monay today's "programmers" would try to code chess at least once in their life? 04:18:19 *how many 04:18:57 b_jonas: for a time until about 1990 a lot of people tried hard to claim serbian and croatian were the same language hth 04:20:49 (based on politics and the silly argument that there was hardly any difference to speak of) 04:21:19 (then the politics changed and they quickly started making sure there _was_ a difference) 04:21:54 (but i think they're still closer than say bokmål and nynorsk?) 04:22:12 oerjan: yep 04:22:38 so maybe serbian latin was popular before the 50s, that's just too old for me to have noticed? 04:22:52 I don't see books that old often 04:23:09 or, you know, posts on the internet that old 04:23:18 zzo38: Icelandic still has thorn 04:23:27 shocking 04:24:09 I think Icelandic's use of þ is the reason it has a default keybinding on this layout 04:24:17 and eth (ð) which is like the voiced version i think 04:25:38 voiced th bothers me so much, because my brain tries hard to refuse to hear it as distinct from the unvoiced version 04:25:42 nakilon: istr ada lovelace similarly coded tic-tac-toe 04:26:06 which is a bit easier 04:26:11 it took me a while to figure out whether the "th" in "thorn" is voiced or unvoiced, I had to say it over and over again and compare with reference words 04:26:16 and the sounds aren't even that close 04:26:54 ah, so IOCCC had a chess engine before 2005/toledo: 1992/vern. 
I thought it only had toledo's chess engine, his X11 chess program, and suicide chess. 04:27:02 I've been wondering if there's a way to spell either of the sounds to make their pronunciation unambiguous to a typical English speaker 04:27:12 oerjan funny but it's just exactly 1-2 days ago that I, while having fun on lichess, started thinking about making chess and then switched to the idea of starting with tic-tac-toe to avoid spending time on coding the rules -- this is why yesterday I threw some links about Gomoku 04:27:31 I guess "vh" for voiced "th" isn't massively far off 04:27:41 (because the tic-tac-toe would be too fast to calculate fully) 04:27:41 and "fh" for unvoiced? 04:28:09 it helps that "v" is the voiced version of "f" 04:28:35 (oerjan-style thought bubble: does that mean that the opposite of "voiced" should be "foiced"?) 04:28:51 ais523: oh, so that's how you're supposed to pronounce fhtagn? 04:29:04 or even "foist", I guess 04:29:16 voiced th bothers me so much, because my brain tries hard to refuse to hear it as distinct from the unvoiced version <-- huh are there no minimal pairs in english? 
i don't know a rule to know which one is correct where 04:29:22 and also I wanted to ask you guys if there is something between Tic-tac-toe and Nim that would be easy to implement the rules machine and yet hard to calculate, but then I discovered the https://en.wikipedia.org/wiki/M,n,k-game that allows me to just take different n,m,k 04:29:40 oerjan: "the"/"this" is not a pair but make for good reference words 04:29:52 now you just have to explain all the other consonant combinations that appear at the start of a word only in incantations to summon Cthulhu 04:30:07 I guess "this" (voiced) / "thistle" (unvoiced) have a paired syllable 04:30:31 ais523: on the other hand as a norwegian i have similar problems with voiced and unvoiced s (the latter doesn't exist in norwegian, which has no voiced sibilants) 04:30:32 (the second "t" in "thistle" is silent, fortunately, or it wouldn't work) 04:31:34 I don't know a rule for which th to use either, but I assume one exists, because I've never had trouble pronouncing an unknown English word with a th in it 04:32:00 ais523: it seems to me that dh would be a reasonable spelling of the voiced version 04:32:15 actually I've implemented the Gomoku in around 2007 in C++Builder -- it was thinking for a few seconds and wasn't easy to beat, at least for me; the whole 3 (or 4?) deep loop was hardcoded with no recursion 04:32:23 ais523: what? there's no way the "t" is always silent. isn't it just one of those "t"s that are sometimes silent, like the one in "often"? 04:32:24 the problem is that dh is a real digraph which has its own pronunciation 04:32:51 Wiktionary says always silent 04:33:00 or maybe earlier than 2007 because it feels like I didn't have internet yet to know for sure how it should be done 04:33:07 and pronouncing it would be really weird, it'd end up rhyming with "pistol" 04:33:38 so does my Longman. funny. 04:33:45 in "often" the 't' is sometimes pronounced but it's rare 04:33:56 . 
o O ( it's not oerjan-style without the prefix hth ) 04:34:38 https://en.wikipedia.org/wiki/Pronunciation_of_English_%E2%9F%A8th%E2%9F%A9 seems useful 04:35:10 lol I remember how in school every teacher of English was reteaching us the pronunciation 04:35:16 ais523: um "the"/"this" are both voiced unless i am far more gravely mistaken about them than i thought 04:35:16 they all said "your previous teacher is dumb" 04:35:50 oerjan: I was about to say that 04:35:57 I /just/ realised that "the" is voiced 04:36:05 even though I was sure it was unvoiced earlier 04:36:14 I came here to say that, and then noticed your ping 04:36:26 this is how hard it is for an English speaker to tell them apart 04:36:37 "this"/"thistle" helped, though 04:36:51 we were officially taught british english and when there was a teacher of english literature that "lived in USA for several years" we could not understand her at all 04:37:48 (One thing that I do think can be good to continue using thorn letter in English is when you want to abbreviate "Thursday" as one letter, so that is difference from "Tuesday". And then, write "L" for "Lyeday" (as another name for Saturday); I have seen suggestion Lyeday for Saturday too (and it look like it is another name for that day in Proto-Germanic), and I like this because it is not "S" like "Sunday") 04:37:50 huh 04:37:57 American English and British English are mutually intelligible but the differences are actually quite large 04:38:27 I had also heard someone who speak English could understand well enough in most countries (even those who are not English) except in England 04:38:35 and I could see how it would be difficult as a second language 04:39:34 I mean like those youtube videos of "speaking with scottish accent" or something -- just as hard to understand 04:39:48 zzo38: shouldn't we just abbreviate them as the alchemical symbols for the Moon, Mars, Mercury, Jupiter, Venus, Saturn, Sun? 
04:39:53 the days of the week that is 04:39:55 ah, here we go, the rules for telling the "th"s apart: https://en.wikipedia.org/wiki/Pronunciation_of_English_%E2%9F%A8th%E2%9F%A9#Phonology_and_distribution 04:40:08 she was dictating things and we were just pretending we understand, but mostly asked each other "what did she just say?" ..D 04:40:17 or W-1, W-2, W-3, W-4, W-5, W-6, W-7 if you prefer 04:40:30 nakilon: a sufficiently strong accent can be hard to understand even for native speakers 04:41:21 Yes, abbreviating them as the symbols for the planets is another way, maybe is better 04:41:40 heh, I love the way that Wikipedia points out that "lighthouse" is an exception, the t and h are in different syllables 04:42:14 I remember how I asked: "road or wrote?" and in Russian it actually sounds like "в рот" meaning "into a mouth" -- the class went laughing and she asked me to leave and after that or another lesson she just refused to do lessons with me, lol 04:42:36 I don't remember how I was rated in the end of the year 04:43:33 probably now I could understand from the context if it's road or wrote but not when you are 16 or so 04:45:41 nakilon: write/right/rite/wright is one of the worse homophones in English 04:45:51 "cloth"/"clothe" aren't a pair because the vowel is different, but I can pronounce "clothe" with either th easily 04:46:13 ais523: can you pronounce it with either th as either a noun or verb? 04:46:15 (I think in alchemy they are just used as the sign for different metals and chemical elements; in astrology/astronomy they are used to represent the planets, Sun, and Moon.) 04:46:48 b_jonas: the verb can only use the voiced version, if I pronounce it unvoiced it sounds like a nonexistent noun (an irregular singular of "clothes") 04:46:54 zzo38: yeah, astrological symbols might make more sense 04:47:02 which is weird because the th in "clothes" is voiced too 04:47:30 how do you pronounce "clothing"? 04:47:53 `? 
hth 04:47:55 hth ([ʰtʰh̩]) is help received from a hairy toe. It is not at all hambiguitous. 04:48:19 b_jonas: voiced 04:48:34 I'm getting better at telling them apart but it takes so much concentration 04:48:47 in the first class we were taught to spell "can't" with "a", and in the second class we've got another teacher and she said "omg, don't say like that", you should use "e" sound otherwise it sounds bad -- only after school I've learned what word she meant 04:48:49 huh... now I'll have to look that up in the Longman too 04:48:57 I guess that's another pair – the second syllable of "clothing", against "thing" 04:49:16 but then in movies I hear "can't" exactly like we were taught in the first class so we were not fully wrong, it was just an accent 04:49:43 longman says "clothing" is voiced too 04:50:10 apparently "th" in the middle of a word is nearly always voiced, and "th" at the start of a word is nearly always unvoiced – Wikipedia claims that there are exactly 14 base words whose derivatives start with voiced "th", and all other words are unvoiced 04:50:25 that would explain why I rarely have much trouble getting it correct 04:50:48 "th" at the end of the word varies by both the word and by the accent of the speaker, according to Wikipedia, and when I think about it I think that's right 04:50:58 heh, TIL this longman thing https://www.ldoceonline.com/dictionary/can-t what does red and blue mean? they are exactly what you meant 04:51:01 *what I 04:51:15 basically, the less voiced your final ths are, the further north you live 04:51:19 if you're in the UK 04:51:35 -!- Everything has quit (Quit: leaving). 04:52:07 nakilon: no 04:52:37 my longman is a printed dictionary from the same publisher (Langenscheidt-Longman) 04:52:49 called "Dictionary of Contemporary English" 04:53:08 though the definition in that page is confusing 04:53:23 " something is impossible or unlikely" is exactly the "Sorry, I can’t help you." 
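The distribution rule quoted above (word-initial "th" is nearly always unvoiced, except in a small closed set of function words) can be sketched as a lookup. The word list here is my own approximation, not Wikipedia's exact 14-item list, and derivatives of those base words are not expanded:

```python
# Heuristic for the voicing of a word-initial "th", per the rule in the log:
# voiced only for a closed set of function words, unvoiced everywhere else.
VOICED_TH_WORDS = {"the", "this", "that", "these", "those", "they", "them",
                   "their", "then", "than", "there", "thus", "though", "thy"}

def initial_th_voiced(word):
    """Guess whether a word-initial "th" is voiced (as in "the")
    or unvoiced (as in "thing")."""
    w = word.lower()
    return w.startswith("th") and w in VOICED_TH_WORDS

assert initial_th_voiced("this")
assert not initial_th_voiced("thing")
assert not initial_th_voiced("thistle")   # the "this"/"thistle" pair from the log
```

As the log notes, this only covers word-initial position; medial and final "th" follow different (and accent-dependent) patterns.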
04:53:38 huh, I missed definition 2 there 04:53:47 a very good dictionary in the sense that its definitions are easier to understand than the ones in Oxfords, and much easier than the ones in Websters 04:53:51 if you write it as "cannot" it sounds more like an order, rather than a state of fact 04:54:42 I hadn't noticed that rule before, and was vaguely surprised that a dictionary picked up on it 04:55:01 however, this seems to be a meaning that's missing from "cannot" rather than an extra meaning of "can't" 04:55:04 b_jonas can’t | meaning of can’t in Longman Dictionary of Contemporary English | LDOCE 04:55:04 I highly recommend using Longmans as the first one-language English dictionary for foreign language learners 04:55:39 nakilon: ok, but this one doesn't have red and blue stuff, except the blue L on the cover, and definitely doesn't have sound recordings 04:56:01 ..D 04:56:23 so I don't know what red and blue means 04:56:28 yout typography had only black ink 04:57:17 oh when you hover the mouse it says red british blue american 04:57:50 haha, so the teacher from 2 to 9 classes was wrong 04:59:05 now I'm trying to imagine what American English "can't" sounds like (without cheating by playing a recording) 04:59:44 oh, Wiktionary says /kænt/, and æ is easy, that'd basically be British English "cant" 05:00:17 I guess the transcription on that page means the same: /kɑːnt $ kænt/ 05:00:23 it's just not colored 05:00:36 …it crosses my mind that I've partially learnt quite a few alphabets 05:01:15 I know more than half of Cyrillic but not all of it, I struggle with many of the vowels, and also the consonants that are written as digraphs in English (I have trouble remembering which is which) 05:02:07 and I know some of IPA but not all of it (that's especially hard for sounds that aren't in English, although I can normally get there by reading the Wikipedia article and following its instructions about where to put the various parts of my mouth to pronounce them) 
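Earlier in the log, the m,n,k-game (tic-tac-toe generalised to an m x n board where k in a row wins) came up as a family whose rules are trivial to implement while the play can be made arbitrarily hard by picking m, n, k. A minimal sketch of the rules engine's win check; the function and board representation are my own, purely illustrative:

```python
# Win check for an m,n,k-game: does `player` have k in a row in any of the
# four line directions (horizontal, vertical, both diagonals)?

def wins(board, player, k):
    """board: list of equal-length row strings; player: a single character."""
    m, n = len(board), len(board[0])
    for r in range(m):
        for c in range(n):
            if board[r][c] != player:
                continue
            for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
                cells = [(r + i * dr, c + i * dc) for i in range(k)]
                if all(0 <= rr < m and 0 <= cc < n and board[rr][cc] == player
                       for rr, cc in cells):
                    return True
    return False

b = ["X..",
     ".X.",
     "..X"]
assert wins(b, "X", 3)        # main diagonal
assert not wins(b, "O", 3)
```

Tic-tac-toe is the (3,3,3) instance and Gomoku is (15,15,5); only k and the board size change, which is exactly the appeal mentioned in the discussion.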
05:04:25 hmm, I wonder whether the "th" in words like "lighthouse" should be technically written as "tḧ"? although the diaeresis is only supposed to be used on vowels, I guess now that it isn't used anywhere any more I can put it where I like 05:04:52 brilliant 05:04:54 b_jonas does your book longman have one or two transcriptions? 05:07:58 oerjan: anyway, now that I've read the rules, I can understand why þ/ð pairs are so hard to find in English – the rules pretty much eliminate them 05:08:09 ahaha, another example https://www.ldoceonline.com/dictionary/homework 05:08:23 you would probably need to find a loanword with a th in the middle, that happened to otherwise be the same as a native word 05:08:40 in 1st class we were taught to say hAm, and the next teacher said it's stupid, say hOm 05:08:47 oʊ? wow 05:08:50 "because otherwise it sounds like harm work" 05:09:16 American English sounds ridiculous to Brits sometimes 05:09:45 this is so weird to finally know why other teachers said we've got not the best teacher during 2-9 05:10:13 I don't think I could teach American English 05:10:18 she was kind of right to say cEn't and hAm but not while saying that "we are learning british one" at the same time 05:10:26 the vowels are so different 05:11:49 I think this was all wrong to decide that "we teach this one" because in the end today I have no idea which one is which 05:12:05 is it ...tize or ...tise, color or colour, beughbor or neighbour, etc. 05:12:45 it would be nice if instead of "we learn british, that's it" we were "learning both, here remember the difference..." 05:12:51 …ise vs. 
…ize, nobody has any idea any more 05:13:01 so we would pay attention 05:13:04 both are used interchangeably in both Britain and the US any more, probably because they're pronounced the same way 05:13:12 s/any more/nowadays/ 05:13:22 "beughbor" oh god today's typos 05:13:24 some people try to insist on a rule to distinguish them but nobody can remember which is which 05:13:44 and I have spellcheckers on my computer with strong opinions about -ise versus -ize but they disagree with each other 05:13:58 ahah, I always ignore those red lines 05:14:03 *underlines 05:14:04 however, in most of the cases where the words are a different length, the British spelling has more letters than the American 05:14:27 so "neighbor", "color" are US, "neighbour", "colour" are UK 05:14:49 also I guess it was less practical to learn the british one 05:14:52 despite being British, I will often use the American spellings in technical contexts, sometimes intentionally, sometimes not 05:15:02 they could not predict that though, we are the Europe 05:15:03 American is more widely used, I think, because they outnumber us 05:23:07 do we? 05:24:36 by a factor of about 5, it seems 05:25:28 damn, sorry. 05:25:30 we'll try harder. 
05:28:40 I guess I look down on anyone teaching English as a second language who doesn't realise that British and American English both exist, though (and it's confusing to change from one to the other mid-course) 05:33:06 oerjan: OK, I finally found a þ|ð pair: "loath" / "loathe" 05:33:33 although it's not great because "loath" is pretty obscure as words go 05:34:23 oddly, Google Ngrams has "loath" as more common than "loathe", but derived forms of "loathe" (such as "loathing") beat both 05:34:39 * ais523 quickly looks up how to pronounce "Lothian" 05:34:53 ð it seems 05:36:11 the pair only works because the "e" is supposed to lengthen the preceding vowel but it's long anyway, so it has no effect other than moving the "th" away from the end of the word 05:36:28 the funny thing is, to an English speaker's ear, "loathe" sounds like it has a longer vowel than "loath"! 05:37:36 (it doesn't, it's just perceived like it does) 05:38:46 -!- delta23 has quit (Ping timeout: 256 seconds). 05:41:24 …it strikes me that English is arguably an esolang 05:41:52 maybe that's why we discuss it so much here 05:42:16 ais523 we were taught that there are two englishes but we didn't bother to learn both 05:42:32 I guess the books were specializing on one of them 05:42:54 and teachers could believe or speculate which pronunciation is correct 05:43:27 there's more than two, but, e.g., Indian and Australian English aren't heard much outside their native countries 05:44:11 we had only one tape recorder in school to bring from class to class to play some english recordings on rare occasions, usually a special kind of exam -- "an audition" 05:44:15 There are more than two kinds of English; there is also Canadian English too. One thing on the CBC they said, should you use American or British spelling in Canada? I say, you use Canadian spellings in Canada (except computer commands, which will be American). 
05:44:32 yeah I know australian english from youtube 05:45:11 zzo38: I think Canadian English is a good compromise between British and American 05:45:21 most people at least of my age didn't even have the full school course of english, usually only 6 years, not 10, or even german instead 05:45:53 so people just didn't give a damn, and again there was no internet, only tape recordings 05:47:59 I guess the Internet would be one of the main reasons to learn English 05:48:40 it's the main use I've made of my foreign language knowledge – I rarely have a reason to use languages other than English when communicating with other UK residents, and rarely go abroad, but the Internet makes it easy to communicate with people all over the world 05:48:49 Yes, I also think the Canadian English is good compromise between British and American, too. 05:48:55 "reasons to learn English" -- meh, most people won't believe in that even today 05:50:01 there is a huge imaginary world in which Russians are living where "we don't need anything from outside, it's enough in here" while english memes are leaking but learning things from the source is considered wrong and shameful 05:50:30 a lot of americans have that view too. 
05:51:19 lots of Brits as well 05:51:20 I worked in many companies and in none of them was there even one coworker who would speak with anyone abroad at all; people have learned English only to understand posts on stackoverflow 05:52:38 I'm considerably better at understanding foreign languages than typing/speaking them; I wonder if the easiest way to speak with people when there's no fluent common language is for everyone to speak their own language 05:52:47 I think that there are reasonable reasons today to learn English writing even if not English speech 05:53:00 that said, I hardly know any Russian, apart from the occasional loanword 05:53:59 French and German come up much more often, probably just based on geographical proximity 05:54:19 and Dutch for some reason, but if you know both English and German, you can often guess at what a Dutch sentence means 05:54:55 the funny thing is, at school, I chose Latin as my foreign language primarily because it was taught a lot better than French and German were 05:55:29 maybe that's a good reason 05:56:13 we had a lecturer of programming who was showing slides with just a pseudolanguage that maybe was even his own one 05:56:54 but at least he taught us programming and then we could learn the languages specifically 05:57:11 pseudocode doesn't normally follow actual rules 05:57:18 it's just "whatever I expect the audience to understand" 05:58:13 I guess it's the natural-language version of programming, you can be a lot more flexible because you don't need to let a computer understand what you're saying, just a human, and humans can fill in missing parts more easily 06:27:58 I don't want to change rasel specification, I don't want to bloat it, and I don't want to make too many derivatives of the same thing; but I still keep thinking about threading and other 2d-related additions, can't decide where to put them; maybe later 06:29:52 maybe I need a language with support of adding powerful extensions 06:30:27 not that I saw anything 
like that 06:31:04 to be able to have such extensions that would change the runtime so much like adding the subroutines I imagines few hours ago 06:31:50 *imagined 06:37:29 Funge-98 has some pretty powerful extensions 06:37:53 I think that's the only example of something like that I've seen in an esolang (not counting esolangs where the entire language can be redefined at runtime, as that isn't really the same thing) 06:40:10 -!- src has quit (Ping timeout: 240 seconds). 07:36:05 -!- earendel has quit. 07:36:20 -!- earendel has joined. 07:56:40 -!- ais523 has quit (Quit: quit). 08:06:36 -!- hendursa1 has joined. 08:10:21 -!- hendursaga has quit (Ping timeout: 276 seconds). 08:22:17 -!- earendel has quit. 08:22:38 -!- earendel has joined. 08:49:12 -!- Sgeo_ has quit (Read error: Connection reset by peer). 09:22:58 -!- Koen_ has joined. 09:25:07 -!- Lord_of_Life has quit (Quit: Laa shay'a waqi'un moutlaq bale kouloun moumkine). 09:51:34 -!- oerjan has quit (Quit: Later). 10:17:31 -!- Robdgreat_ has joined. 10:18:26 -!- Robdgreat has quit (Remote host closed the connection). 10:19:32 -!- Robdgreat_ has changed hostmask to ~rob@user/robdgreat. 10:19:38 -!- Robdgreat_ has changed nick to Robdgreat. 10:53:52 -!- arseniiv has joined. 11:03:02 -!- imode has quit (Ping timeout: 245 seconds). 11:12:06 ais523: yes, American "can't" is pronounced as /kænt/ which means that before a verb that starts with a "t" or "d" it sounds exactly the same as "can". some say the American solution to this is that "can" is always pronounce weak with a schwa, but I find that sort of hard to believe. 11:13:49 ais523: re `whether […] "lighthouse" should be technically written as "tḧ"?' => no, just use a hyphen, as in "light-house" if you think without it it's hard to read 11:13:59 though I think in the case of lighthouse there's not much need for it 11:14:14 maybe for hot-headed or pot-hole 11:15:58 "…ise vs. …ize, nobody has any idea any more" I always try to use ize. 
11:16:37 "I have spellcheckers on my computer with strong opinions" => many spellcheckers have strong opinions about words where there are two variant spellings/pronunciations used. 11:18:51 like try to look at any Hungarian spellchecker, if you take any word where different people use different forms of the conjugation or declension for the same thing, it's very likely that the spellchecker only accepts one of them, except when the alternative coincides with what it thinks is either a different form of the word (can happen for a few verb forms) or used with a different meaning of the same 11:18:57 root word in the same form (usually for nouns) 11:20:19 as for color, neighbor, and all the other -ours that don't come up in mathematics that much (honor, flavor, odor, favor, valor, candor etc), I now try to consistently use the -or spelling, but sometimes I still typo into the -our version that I used to use 11:21:30 `ais523: oerjan: OK, I finally found a þ|ð pair: "loath" / "loathe"' => wait, what was the problem with cloth/clothe and bath/bathe? 11:21:32 ais523:? No such file or directory 11:22:02 (the latter possibly only in some dialects) 11:22:28 -!- Koen_ has quit (Remote host closed the connection). 11:23:04 ` b_jonas does your book longman have one or two transcriptions?' => usually one; two if the most common American pronunciation can't be derived from the british pronunciation that they list 11:23:05 ? No such file or directory 11:24:44 really 11:24:54 no such file or directory 11:25:18 I had to name my language nakilon 11:25:30 ` I wonder if the easiest way to speak with people when there's no fluent common language is for everyone to speak their own language' => it can occasionally happen, but rarely, mostly because switching languages quickly can be mentally taxing too 11:25:31 ? No such file or directory 12:25:04 -!- Koen_ has joined. 12:26:11 -!- earendel has quit (Quit: Connection closed for inactivity). 12:42:46 -!- Lord_of_Life has joined. 
12:57:02 -!- hanif has joined. 12:58:18 -!- hendursa1 has quit (Quit: hendursa1). 12:58:46 -!- hendursaga has joined. 13:01:30 -!- src has joined. 13:45:58 -!- mla has joined. 13:49:09 please upvote https://news.ycombinator.com/item?id=28423029 if you like chess and Haskell :-) 13:59:20 -!- riv has quit (Quit: Leaving). 14:03:24 -!- ais523 has joined. 14:04:23 b_jonas: "cloth/clothe" and "bath/bathe" have different vowels, so they aren't exact matches 14:04:37 the reason "loath/loathe" works is that the vowel is long already, so the e doesn't lengthen it 14:05:27 meanwhile, multiple sources I've checked suggest that α⁸+α⁴+α³+α+1=0 produces the finite field that's used in AES encryption, which confuses me because multiple *other* sources say it doesn't produce a finite field at all 14:13:34 do bath/bathe have different vowels in all dialects? 14:14:11 cloth/clothe ... I never understand how "o" vowels work in English. let me look these up in a dictionary 14:14:16 I think so, at least in British English; the "a" of "bath" varies but it never matches the "a…e" of "bathe", which doesn't 14:15:06 yeah, you're right, "bathe" has a long vowel 14:15:16 ...dearest creature in creation... 14:16:28 and yes, dictionary agrees with you for "cloth/clothe" 14:16:51 . o O ( Any clothing thoughts? ) 14:16:57 something I've been trying to do in my head is to work out, for each phoneme used by English, a way to unambiguously represent it so that it can't be misread as being some other phoneme (potentially context-dependent) 14:17:45 I'm not sure it's possible; I haven't found any letter sequence that unambiguously encodes the vowel in bye, pie, sigh, etc., when it appears at the end of a word 14:18:08 ("i…e" encodes it unambiguously when separated by a consonant other than "g") 14:18:16 phoneme != phone me 14:19:23 hmm... 
if there aren't enough good pairs, then we should invent a sci-fi word "theron", with fake greek etymology, that is pronounced like "thereon" but with an unvoiced "th" 14:19:38 English orthography reforms are usually doomed. I think both Jan Misali and Conlang Critic have videos about this. 14:20:11 Corbin: this isn't intended as an orthography reform, so much as a way to unambiguously communicate pronunciation of a word to someone who speaks the same dialect of English as you 14:20:45 ais523: as for unambiguously representing each phoneme, usually you just give an example word that is common enough and has that vowel unambiguously enough. there are even lists of such example words for each phoneme, usually to explain what phonemes they're talking about or a specific phonetic notation. 14:21:08 b_jonas: yes, but they struggle sometimes 14:21:08 and I think there was something more like what you want, but not that unambiguous 14:21:55 both my Longman and my Oxford have such example words 14:22:38 https://en.wikipedia.org/wiki/SAMPA_chart_for_English has a list too but possibly not as good 14:22:46 /ɨ/ is a good example of a phoneme it's hard to find good example words for 14:23:08 ais523: Many English words have multiple legal pronunciations depending on position and emphasis. First one that comes to mind is "the", which can start either voiced or unvoiced and use either a stressed vowel or schwa. 14:23:21 ais523: is that a phoneme that appears in an English dialect? 14:23:31 as opposed to non-English languages 14:23:43 because some people pronounce it identically to ɪ and others don't use it at all 14:24:00 If we ignore that topological obstacle, then maybe examine Lojban's inventory of vowels; it has nearly everything an English-speaker could want. (Many other folks are grumpy about the lack of -eu-; nothing's perfect.) 
14:24:01 b_jonas: I think the standard example is the second vowel in "minute" (= 60 seconds), when it isn't a schwa 14:24:49 but I pronounce that in a way that I perceive as identical to ɪ 14:25:10 Alan's pronunciation dictionary defines more phonetic symbols than there are phonemes in any dialect, because each symbol can represent a combination of possible phonemes depending on the speaker 14:25:53 https://en.wikipedia.org/wiki/SAMPA_chart_for_English has a list of that kind of set of combo-phonemes, sourced from the Wells books 14:26:18 apparently some dictionaries have taken to using ᵻ as something that's somewhere along the ə-ɪ scale 14:28:41 fwiw, I can get a phoneme that I don't normally use by putting my mouth into the position for /i/ ("ee") and then trying to say /ʊ/ (short "oo") without moving it; based on the definitions of Wikipedia that *should* be /ɨ/ but I'm not sure I believe it, as it seems inconsistent with definitions elsewhere 14:31:48 Alan's pron has such a specific combo symbol ê for the second vowel of "minute" the noun in the American pronunciation, and explains that this is sometimes pronounced as the "ship" vowel and sometimes as a schwa 14:32:00 (and possibly sometimes as some other vowel) 14:32:20 though apparently that other vowel is supposed to be /ɨ/ 14:33:51 this finite field thing is bothering me, though 14:34:23 I can understand. 
I decided to stop trying to figure it out yesterday, but should continue at some point 14:34:24 I think most consistent with the sources is that α⁸+α⁴+α³+α+1=0 *does* produce a valid finite field, but it isn't the one that most people use for GF(2⁸) 14:35:30 it should be possible to figure that out by making a multiplication table and checking if you can find no zero divisors and an order 128 element 14:38:35 that or ask Sage 14:38:56 I'm trying to ask Sage but I can't figure out how to formulate the question, I'm not even sure which field to ask in 14:39:48 the polynomial ring GF(2)[x] factored by that specific polynomial x**8+...+1 (not typing here because I'd typo it) 14:40:44 did you try to check a recent AMD manual? 14:41:25 if all else fails, we can ask on Cryptography SE, but I should make an effort trying to figure this out first 14:41:32 OK, both the standard version (with α²) and the AES version (with α) are irreducible according to Sage 14:42:13 I guess the next interesting question is what value each ring's α has in the other ring 14:42:18 err, each field's 14:42:28 ais523: but does irreducible mean the factor gives the finite field? 
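[Editor's note: the irreducibility claim above is easy to confirm without Sage. This is a minimal Python sketch, not from the channel, with made-up names; polynomials over GF(2) are encoded as bit masks.]

```python
# Sketch: brute-force check that x^8 + x^4 + x^3 + x + 1 is irreducible
# over GF(2), i.e. that quotienting GF(2)[x] by it really gives a field.
AES_POLY = 0b100011011  # bit i = coefficient of x^i

def poly_rem(num, den):
    """Remainder of GF(2)[x] division, polynomials encoded as bit masks."""
    while num.bit_length() >= den.bit_length():
        num ^= den << (num.bit_length() - den.bit_length())
    return num

# A degree-8 polynomial is irreducible iff no polynomial of degree 1..4
# divides it; over GF(2) those divisors are exactly the bit patterns 2..31.
assert all(poly_rem(AES_POLY, d) != 0 for d in range(2, 32))
print("x^8+x^4+x^3+x+1 is irreducible over GF(2)")
```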
14:42:59 because that's what wasn't clear to me, that's why I didn't know which of the fxt tables to look at 14:43:15 yes, if you have any irreducible polynomial, its root gives a valid α to generate a finite field 14:43:29 (whose elements are 0 and the powers of α) 14:43:56 ok 14:44:00 err, assuming that the original polynomial was over a finite field 14:45:30 but in that case, https://www.jjj.de/mathdata/all-irredpoly.txt should be the table to look at, and that table doesn't list 8,4,3,1,0 14:46:11 b_jonas: it does 14:46:19 beneath the "non-primitive" comment 14:46:24 so now all the sources are in agreement 14:46:27 oh 14:46:30 indeed 14:46:42 OK, mystery solved 14:47:29 aha, I see 14:47:46 it seems that a primitive polynomial gives you a root α that generates the entire field 14:48:04 and that's the representation that the x86 GF2P8MULB and other GF2P8* instructions use 14:48:10 and for a non-primitive polynomial, its root is not a valid α because it only generates a subset of the field, but you can take a polynomial in its root to get the whole field 14:49:17 ok 14:49:29 for the AES field, you have to add 1 to the root of its polynomial to get the typical α (thus α is 3 in the usual encoding, rather than 2 like it is for most binary finite fields) 14:50:32 so is either x or x+1 an order 128 element in all the representations that you get from irreducible polynomials? 14:50:42 sage: y.<x> = GF(256) 14:50:44 sage: a = x+1 14:50:45 sage: a**8 + a**4 + a**3 + a**1 + 1 14:50:47 0 14:50:57 err, I named a and x the wrong way round, but you get the idea 14:51:14 b_jonas: I'm not sure, my guess is no but it's just a guess, not even an educated guess 14:53:57 ok 14:54:14 but what does this have to do with AES? 
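[Editor's note: the "irreducible but not primitive" point can be checked directly. A Python sketch, not from the channel, multiplying in the AES representation and computing multiplicative orders; it shows the polynomial's root (byte 0x02) generates only a proper subgroup, while 0x03 = root+1 generates all 255 nonzero elements, which is why "α is 3" in AES.]

```python
# Sketch: multiplication in the AES representation of GF(2^8),
# i.e. GF(2)[x] modulo x^8 + x^4 + x^3 + x + 1 (0x11B).
AES_POLY = 0x11B

def gf_mul(a, b):
    """Multiply two field elements (bytes) in the AES representation."""
    r = 0
    while b:
        if b & 1:
            r ^= a          # add a copy of a for this bit of b
        b >>= 1
        a <<= 1             # a *= x
        if a & 0x100:
            a ^= AES_POLY   # reduce modulo the field polynomial
    return r

def order(e):
    """Multiplicative order of a nonzero field element."""
    acc, n = e, 1
    while acc != 1:
        acc, n = gf_mul(acc, e), n + 1
    return n

# The polynomial is irreducible but not primitive: its root 0x02 has
# order 51, while 0x03 has order 255 and generates the whole group.
print(order(0x02), order(0x03))  # 51 255
```

(The FIPS-197 worked example {57}·{83} = {c1} is a handy sanity check for `gf_mul`.)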
14:55:11 so the reason Intel picked that particular finite field representation is that AES 14:55:20 's S-box is based on it 14:55:20 hmm, apparently AES includes calculations in GF(2) in that representation 14:55:52 ais523: but the AESNI extension with the AES* instructions is older than the GF2P8* instructions 14:56:12 b_jonas: the common theory is that because the AES instructions existed already, the processor had circuitry for handling one specific finite field already 14:56:20 so it was easy to expose it in a way that wasn't tied to AES 14:56:40 or perhaps it's used in other crypto primitives too, and the AESNI instructions can't help with those 14:57:15 and one of the GF2P8 instructions can apparently be used to convert between finite field representations, so you can have any GF(256) representation you like (as long as it's polynomial-based rather than logarithm-based) 15:00:04 -!- APic has quit (Read error: Connection reset by peer). 15:00:39 -!- APic has joined. 15:01:16 while PCLMULQDQ multiplies two GF(2) polynomials, from two 64 bit long polynomials to a 128 bit long result, and doesn't try to do the reduction 15:02:03 right 15:02:15 let me check the CRC32 instruction too 15:04:06 I think that one does a GF(2)[x] multiplication by x**32 then a reduction using a specific 32-bit polynomial 15:04:16 for computing CRC32 checksum 15:04:49 it's just so weird seeing a finite field representation where the generating element and the root of the polynomial are different, it becomes non-obvious which one gets to be called α 15:04:50 but I'm not sure 15:05:06 no wonder the AES field isn't the one that all the finite field software standardised on for GF(256) 15:05:39 or maybe not, I'm not sure 15:06:21 ais523: is the representation that they standardized the same as the representation in the IOCCC entry? 15:06:32 which IOCCC entry? 
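[Editor's note: the unreduced polynomial multiply described for PCLMULQDQ is easy to model in software. A hypothetical Python sketch, not from the channel:]

```python
def clmul(a, b):
    """Carry-less multiplication: multiply two GF(2)[x] polynomials
    encoded as integers, using XOR instead of addition and doing no
    reduction -- a software model of what PCLMULQDQ does, where two
    64-bit inputs yield a 128-bit product."""
    r = 0
    shift = 0
    while b >> shift:
        if (b >> shift) & 1:
            r ^= a << shift
        shift += 1
    return r

# (x^2 + 1)(x + 1) = x^3 + x^2 + x + 1, with no carries between bits:
assert clmul(0b101, 0b11) == 0b1111
```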
15:06:53 the answer may well be "I can't tell" unless the hint file is good, though, given that the question may require me to understand obfuscated C 15:07:42 http://www.madore.org/~david/weblog/d.2012-10-14.2083.html#d.2012-10-14.2083 15:08:01 the description of how it works is in a comment of David's non-obfuscated program 15:08:16 that program is at ftp://ftp.madore.org/pub/madore/misc/shsecret.c 15:12:21 this multiplication table has 2×2=3 (where the numbers refer to the internal representations of the finite field elements, not the regular 2 or 3) 15:12:31 I also don't remember what representation my https://www.perlmonks.com/?node_id=863110 or the inspiration https://www.perlmonks.com/?node_id=862789 uses, I'll have to re-read those 15:12:41 so I think it's using a very unusual representation of GF(256) 15:12:55 (there's only one GF(256), the interesting part is how you order the elements) 15:12:56 Hm. I wonder if this is an NSA/CIA situation, where the AES field's primitive operations can be used for something other than typical encryption or decryption of AES. 15:13:41 Somebody must have had a use for these instructions. It's curious that Intel doesn't explain how to use them for arbitrary work. 15:13:50 oh, those both represent GF(2**7), not GF(2**8) 15:14:35 even so, every binary field larger than GF(4) (which doesn't have an element numbered "4") has 2×2=4 in the most common representation, also the second-most common representation 15:14:36 Corbin: it's not the Intel manual's goal to explain math or cryptography 15:15:19 Fair. 
15:15:23 Corbin: they also don't teach you all about how to multiply matrices or complex numbers or quaternions with their mul-add and add-subtract instructions 15:15:43 and especially not teach how to use matrix multiplication or complex numbers for anything useful 15:15:59 Corbin: all representations of GF(256) are equivalent and fairly easy to interconvert (you just need to know which element in one field is the α of the other), so circuitry implementing AES isn't going to be inherently AES-specific, you could use it for anything based on GF(256) 15:16:15 and that's the functionality that Intel's exposing now 15:17:19 ais523: That makes sense, I guess. What are you stuck on? 15:17:28 we aren't any more 15:17:42 we were stuck on some apparent inconsistencies between sources but they've all been resolved now 15:17:45 ok, now I'm trying to imagine the Intel manuals with three extra volumes for numerical methods of partial differential equations in the middle 15:18:15 the representation AES uses for GF(256) is mildly weird, but not ridiculously so 15:18:20 ais523: I don't think that's necessarily right, doesn't AES mix those field operations with other things, or at least map the input to it somehow? 
15:19:19 b_jonas: yes, but the circuits for the field operations apparently existed already, there was just no way to use them in isolation 15:19:38 I remember that https://www.perlmonks.com/?node_id=862789 switches between two representations of GF(128): a sane linear one, and a logarithmic one where you add the representatives mod 127 to do a multiplication 15:20:04 and I think it computes the logarithm table for that by repeatedly multiplying with a specific field element, because that's easier to code than a general multiplication 15:20:09 those are both sane, and fairly natural 15:20:16 the polynomial representation is more commonly used though 15:20:32 just like multiplying fixed integers by a constant integer is easier than multiplying two arbitrary integers 15:21:11 ais523: possible, the logarithmic one certainly wasn't natural, I remember the idea was new and very odd to me when I decoded how that obfuscation works 15:21:31 I do admit that it's a good representation in the sense that it did allow martin to golf their code 15:21:42 b_jonas: it's something I thought up independently after learning about finite fields (although I thought of the polynomial representation first) 15:21:46 so it was natural to me 15:21:49 ok 15:21:57 my program, which I wrote later, only uses a linear representation 15:22:29 but I don't know how it works 15:22:51 now I have to try to understand my own obfuscated code 15:23:21 -!- riv has joined. 15:25:01 -!- APic has quit (Read error: Connection reset by peer). 15:25:40 but... that code makes no sense! why does it shift one bit left then extract the high bit? 15:25:50 -!- APic has joined. 15:25:54 why doesn't it just extract the sixth bit instead of shifting and extracting the seventh? 15:27:22 is it an in-place shift, that leaves its input shifted? 
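[Editor's note: the logarithmic-representation trick described above can be sketched in a few lines of Python — illustrative only, and done here for the AES field (mod 255) rather than GF(128) (mod 127). The log table is built by repeated multiplication with a generator, just as described, which is simpler than a general multiply.]

```python
AES_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gf_mul(a, b):
    """Plain polynomial-representation multiply in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= AES_POLY
    return r

# Build the tables by repeatedly multiplying by the generator 3:
# antilog[i] = 3^i, and log is its inverse mapping.
antilog = [1]
for _ in range(254):
    antilog.append(gf_mul(antilog[-1], 3))
log = {v: i for i, v in enumerate(antilog)}

def gf_mul_log(a, b):
    """Multiply via the logarithmic representation: add logs mod 255."""
    if a == 0 or b == 0:
        return 0
    return antilog[(log[a] + log[b]) % 255]

assert gf_mul_log(0x57, 0x83) == gf_mul(0x57, 0x83)
```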
15:27:49 oh yes, it shifts the input in place, and also outputs its high bit 15:27:51 that makes more sense 15:30:10 $v must be the value substituted in the polynomial 15:32:25 my interest in finite fields is mostly related to error-correction algorithms 15:33:10 so h($r) shifts $r in place, returns the carry, and so $r ^= h($r) & "\x217" multiplies the polynomial by x then propagates the carry back so that, I think, x**8 = x**7+x**3+x**2+x+1 15:34:32 «& "\x217"» looks wrong, you'd want the \x217 to be a single character but I think it parses as \x21 followed by 7, and even if it doesn't, the value is too large 15:34:51 yes, sorry, it's "\217" 15:35:01 I just typed it to IRC wrong 15:40:01 -!- APic has quit (Read error: Connection reset by peer). 15:45:26 -!- APic has joined. 15:50:04 -!- tech_exorcist has joined. 15:51:51 -!- hanif has quit (Ping timeout: 276 seconds). 15:54:28 hi 15:54:49 does someone has some good argument to defend the choice of calling python's functools.reduce reduce and not fold? 15:54:57 does someone have* 15:55:45 as far as I understand, functools.reduce really behaves like haskell's foldl or ocaml's fold_left, and really doesn't behave like pyspark.reduce 16:00:53 I don't have a good argument. I wonder why MLs used "fold" and not "kata", though? Since foldl really behaves like a katamorphism. 16:01:33 I figure that Python and PySpark both use "reduce" for the same reason as MapReduce. 
16:02:02 yes, but that's my point 16:02:08 pyspark's reduce is exactly mapreduce's reduce 16:02:17 but python's reduce is not 16:03:01 if python was statically typed, it'd be obvious from the type signature - python's functools.reduce would have the same signature as a fold_left 16:03:49 whereas pyspark.reduce is much more constraining - the function must be (a, a) -> a, where a is the type of the elements in the list 16:04:30 pyspark.reduce is basically "pop two elements at random; apply the function to those two elements; push the result back into the list; repeat until the list has only one element" 16:04:32 Python also has sum(), which can be made to behave like a fold on lists. 16:05:21 Hm. So, on one hand, yes, it looks like one of those is a commutative operation and the other is not. 16:05:28 Koen: I think the name "reduce" comes from smalltalk 16:05:38 no wait 16:05:43 in smalltalk it's called inject? 16:05:45 I don't know 16:06:08 fold, reduce, inject, any of those is fine. insert is a bit too much. 16:06:15 But OTOH we could find the data structure at fault; if we take a fold on a list, but change the list to a set, then the fold automatically has to commute, or else definitionally it's not folding a set. 16:06:59 Corbin: don't you mean associate instead of commute? 16:08:15 i think reduce is perhaps a more intuitive name to someone who isn't steeped in fp 16:09:17 fwiw, I use the term "associative fold" for (a, a) → a style folds where the order of folding doesn't matter 16:09:46 b_jonas: I think that a fold on lists has to already be associative? I mean that the order of values within the container has been forgotten. 16:09:47 does Perl have a fold as a builtin? I know it has a map and a filter 16:10:16 ais523: no, not as a builtin. I think there's one in a module packed with it. 
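[Editor's note: the type-signature distinction above is easy to see concretely. An illustrative Python sketch, not from the discussion:]

```python
from functools import reduce

# functools.reduce is a left fold: the step function has the shape
# (accumulator, element) -> accumulator, so the accumulator may have a
# different type from the elements, and the operation need not be
# associative or commutative.
assert reduce(lambda acc, x: acc - x, [1, 2, 3], 0) == -6  # ((0-1)-2)-3

# Accumulator is a str while the elements are ints -- impossible for a
# MapReduce-style reduce, whose function must be (a, a) -> a:
assert reduce(lambda acc, x: acc + str(x), [1, 2, 3], "") == "123"
```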
16:10:16 Corbin: it doesn't, left-fold and right-fold exist as defined evaluation orders for folding and many practical folds care about the difference 16:10:27 Corbin: if it was associative, we wouldn't distinguish between fold_left and fold_right except for performance issues 16:10:33 but it has a for loop and mutable lexical variables as builtins 16:10:44 List::Util::reduce is the one from a module 16:11:00 Perl 6 has a strange fold builtin. 16:11:18 It takes an operator and uses the associativity of the operator (left or right) to decide the associativity of the fold. 16:11:26 I guess it's not called Perl 6 anymore. 16:11:29 ais523: Yes, but I'm talking about the underlying recursion scheme. Like, a list is a free monoid, which means that list concatenation is associative. Similarly, there's free rules for list folds when we are folding over a concatenation. 16:11:38 I'm an imperative person, I just use for loops 16:12:08 Koen_: We *do* only care for performance reasons! Any "right" fold can be turned into a "left" fold by composing with a list reversal. 16:12:22 fair enough 16:12:43 Hmm, in Haskell list is not a free monoid. 16:13:01 -!- imode has joined. 16:13:02 Corbin: I don't think folds on a list have to use an associative operator. we do fold lists with subtraction to get an alternating sum. 16:13:25 Corbin: even if the list itself is associative, the operation you're folding over it might not be 16:13:26 why not ? :( 16:13:26 but sure, most of the operators you fold with will be associative 16:14:01 b_jonas: in my experience, the operators I fold with are often *not* associative 16:14:07 oh, and we fold with floating point addition too 16:14:10 in particular, the type of the left argument and right argument often differs 16:14:11 I think the thing Corbin is getting at is that a set isn't supposed to expose the order it stores elements in. 16:14:20 ais523: aren't those reduces instead of folds? 16:14:22 An unordered pair is similar. 
16:14:25 so, if you fold with the wrong associativity you get type errors 16:14:27 you could call them folds of course 16:14:36 b_jonas: it's basically an iteration that gets to modify a mutable variable as it goes 16:14:44 ais523: yep, a for loop 16:14:48 except functional 16:14:52 If you have a canonical order on a type, you can just expose the elements in that order, but if you don't, you still want fold {a,b} = fold {b,a} 16:15:10 ais523, b_jonas: This is a good point. I'm curious whether those are genuine katamorphisms. This is a good example of when engineering and theory are using the same jargon for different purposes. 16:16:03 Sure. 16:16:09 I guess evaluating a polynomial, with its coeffs in a list, is also a non-associative fold 16:16:16 Well, presumably you want "katamorphism" to mean an initial F-algebra's unique thingy. 16:16:32 Yeah. To distinguish from paramorphisms, at a minimum. 16:16:36 `? catamorphism 16:16:38 A catamorphism is when you recurse too greedily and too deep. 16:16:45 Those names are all silly in my opinion. 16:16:53 no, a catamorphism is when a cat becomes amorphous to flow through a small gap 16:16:58 shachaf: hmm… suppose you have a set of strings representing integers, and you want to calculate their sum 16:17:11 you can map a parseInt function over the set, then associative-fold over it with addition 16:17:23 or, you can left-fold a "parseInt then add" function over the set, starting at 0 16:18:00 this produces the same result regardless of the order the set is read in, *but* the fold operation isn't associative, which is why you have to left-fold (you would get a type error instead with any other ordering) 16:18:18 nah, the sum is already printed at the bottom of the receipt, you don't have to add them up 16:18:53 Hmm, there are multiple things people mean by "fold", even in a concrete setting like Haskell. 16:19:23 b_jonas: not even if you're implementing the receipt printer? 
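[Editor's note: both examples above — the set-of-strings sum and the polynomial evaluation — can be sketched in Python; illustrative code with made-up names, not from the channel.]

```python
from functools import reduce

# The set-of-strings example: both routes give the same total
# regardless of the set's iteration order.
strings = {"10", "7", "25"}
total_map_then_fold = sum(map(int, strings))  # map, then associative fold
total_left_fold = reduce(lambda acc, s: acc + int(s), strings, 0)
assert total_map_then_fold == total_left_fold == 42

# The polynomial-evaluation example: Horner's rule is a left fold whose
# step acc*x + c is not associative (acc and c play different roles).
def horner(coeffs, x):
    """Evaluate a polynomial given its coefficients, highest degree first."""
    return reduce(lambda acc, c: acc * x + c, coeffs, 0)

assert horner([2, 0, 1], 3) == 19  # 2*3**2 + 0*3 + 1
```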
16:19:24 In Haskell it either means the structural thing for a type, like foldr for a list, or the thing from Data.Foldable, which is always about "sequence"-style folding. 16:19:48 shachaf: yes, there's the non-associative folds for lists, and there's a typeclass for trees that you can fold over in a way that had better be associative 16:19:56 yes, those 16:20:02 ais523: then you don't have strings 16:20:12 no, a catamorphism is when a cat becomes amorphous to flow through a small gap ← https://nhqdb.alt.org/?1404 16:20:37 I like kittens. 16:20:38 ais523: besides, the sum is more authoritative than the details above, because the total is required to be printed there for tax purposes, the itemized bill is an optional bonus 16:20:55 b_jonas: I don't think you can necessarily assume that the receipt printer isn't storing the prices as strings 16:21:02 lol 16:21:14 this sort of software is famous for being badly programmed 16:21:27 Most software is famous for being badly programmed these days. 16:21:29 ais523: strings? those receipt printers existed way before strings 16:21:52 that said, in one supermarket near me, the software automatically regroups the items you've purchased in order to take the least possible advantage from special offers 16:22:00 (which is the reverse of what the customer would want, but useful for the store) 16:22:27 -!- oerjan has joined. 16:22:27 damn 16:22:40 the first ones I saw could only print numerals and a few symbols from their type loop, and then for the header you'd insert a separate stamp template that it stamps on 16:22:55 ais523: hehe, that's odd 16:23:01 -!- Trieste has joined. 
16:23:20 the ones I see here usually preserve the order that the items were entered 16:23:23 this means that sometimes I would put a purchase through as two separate transactions 16:23:35 even the all-digital ones that print a whole page together at the end of the transaction 16:23:36 in order to get a lower price 16:24:09 ais523: This reminds me of banks reordering transactions from largest to smallest to maximize overdraft fees. 16:24:12 (I explained to the cashier that the tills were adding up the bills incorrectly unless I did that) 16:24:15 Which I think US banks sometimes do, at least. 16:24:30 shachaf: wow 16:24:53 shachaf: I've been told (but don't know from personal experience) that some credit cards charge interest based on the maximum amount of debt you've been in since the last time you weren't in debt 16:25:00 -!- APic has quit (Read error: Connection reset by peer). 16:25:04 so that partially paying off a debt does nothing and you have to clear it to avoid paying interest 16:25:09 Big USA banks are terrible; use credit unions when possible. 16:25:11 I'm not sure whether to believe this or not 16:25:31 -!- APic has joined. 16:25:59 ais523: yes, I think mine does that, but not quite phrased that way: it collects all the payments for a month, and you can only clear the debt off after the month ends, even if you actually transfer the money in advance 16:26:25 (the offset of when each month ends depends on the type of the card) 16:26:31 at least in the UK, I think credit card companies aren't allowed to charge you interest if you clear the debt immediately 16:26:52 ais523: yes, they don't charge me interest if I pay within a given period after the month ends 16:27:15 Hmm, I think the methods US credit cards can use to calculate interest are very regulated now. 
16:27:21 they do still charge a fixed yearly fee for having a card 16:28:02 oh, here they mostly don't charge fees for having a card, basically because their competitors don't 16:28:25 and they can't do it stealthily because one of them got sued really hard for not making the existence of a charge clear 16:28:30 ais523: I think there exist credit cards here that don't charge a fixed fee 16:29:07 One thing that I think is true is, if you have no credit card balance, you get about a month of no interest to pay a balance before it starts accruing interest. But once you're in the interest state, new purchases start accruing interest immediately, until you're back down to $0 for a while. 16:29:12 I don't know the details. 16:30:12 In the US credit card companies compete for customers by paying them all sorts of money in various forms. I think that's much less true elsewhere. 16:30:37 It must be a very profitable business, presumably partly at the cost of merchant fees and partly at the cost of people paying a lot of interest. 16:31:36 shachaf: they advertise that they pay you money, but in many cases there's small print that means they pay very little money or only in very specific conditions 16:32:22 or they're credit cards rebranded by a particular airline or supermarket chain, in which case they pay you money if you buy in their brand of supermarket 16:32:39 or something like this, I haven't delved much into the details 16:33:05 they also spend a lot on marketing, they send agents everywhere who try to sign you up for free credit cards 16:33:55 the ones that ask "excuse me Sir, do you work in Hungary?" in supermarkets are generally trying to sell one of those, but there are also other opening lines 16:35:35 I admit it's a good opening question, it distinguishes students with no income from people who probably qualify for a credit card 16:37:03 b_jonas: is the "in Hungary" to make the question less rude by making it ostensibly about something else? 
16:38:39 ais523: I'm not sure. I thought it's because it's more complicated for people to apply for those credit cards if they have an income abroad. Yes, I know there are supposed to be EU laws about this sort of thing, but EU principles and practice often don't seem to match 16:39:04 but yes, it might also be because it sounds less rude 16:39:35 or to make it easier to understand that they're not trying to hire you for a seasonal job or something 16:40:20 I assume that there are few people shopping in Hungary who actually work outside Hungary, so it probably wouldn't be worth asking about unless you want to reassure people about the intentions behind your question 16:40:43 probably not many who live in Budapest, yes 16:41:08 (there would be more close to the Austrian border) 16:43:20 -!- tech_exorcist_ has joined. 16:43:24 When people ask me questions about being local, it's usually related to voting for things. 16:43:44 -!- tech_exorcist has quit (Quit: Goodbye). 16:43:49 -!- tech_exorcist_ has quit (Remote host closed the connection). 16:43:55 hmm, I don't remember what opening questions those people have, they usually have party color decorations 16:43:57 I'm not sure I've ever been asked a question about being local, in contexts where people knew my physical location 16:44:05 -!- tech_exorcist has joined. 16:44:14 oh, I have been asked if I'm local 16:44:19 people ask that when they try to ask for directions 16:44:33 such questions are sometimes perceived as racist, because they tend to be disproportionately asked of ethnic minorities (for the ethnic majority, people just assume they're local, generally) 16:45:09 -!- chiselfuse has quit (Ping timeout: 276 seconds). 
16:45:43 no, backwards because they don't even ask people who they don't think are local, because there's usually a choice of other people to try to ask directions from 16:46:01 mind you, you can still count that as racist if you want 16:46:22 but they don't need to ask that question for that 16:46:29 -!- chiselfuse has joined. 16:46:54 They often ask "are you a California voter?". I think the goal is to get signatures from voters in order to get things like ballot proposals, which require some number of signatures. 16:47:06 I mean propositions. 16:47:15 possible 16:47:50 -!- immibis has quit (Remote host closed the connection). 16:48:02 the questions I remember having been asked are "have you voted yet?" (on the day of the election), and "have you heard of yet?" 16:49:21 -!- immibis has joined. 16:49:48 the UK has really strict rules about campaigning 16:50:38 in the run-up to the election you have to give a platform to every party if you give a platform to one (with various rules about weighting major versus minor parties), and as voting opens you can't publicly do anything related to campaigning at all 16:51:01 (you can still talk to friends privately about the election, but can't, e.g., talk to random people on the street about your party preferences) 16:51:30 news reports on the day of an election, while voting is open, can be pretty funny because it's obviously a major story that they need to cover, and yet they aren't allowed to say anything substantive at all 16:51:37 lol 16:51:50 so the stories are along the lines of "here's the most interesting dog we saw outside a polling station" 16:52:02 even from normally serious news organisations 16:52:26 we're having a recall election for governor 16:52:35 it's structured in a strange way 16:53:10 ais523: I think when voting starts (and shortly before) they are no longer allowed to solicit strangers to vote on a specific party, but they can still solicit people to go voting if they don't specify who for 16:53:11 
ais523: At least the HMRC self assessment system optimizes the foreign tax credit relief deduction order in the way that maximizes the benefit from it. (Then it allows you to reorder if you wish.) 16:53:34 question 1 is whether to get rid of the current governor (yes/no); question 2 is who the replacement should be (from a list of 46 candidates, not including the current governor) 16:53:41 They also round to integer £s always in the direction that's more advantageous to the taxpayer. 16:53:57 so it's quite possible that the current governor will be replaced by someone who got far fewer votes than he did 16:54:34 and the governor says we should leave question 2 blank 16:54:41 as for news stories, they usually say things like "in , an old got sick in the voting room so voting was suspended for 5 minutes. in the seal of the voting box got damaged, so voting got suspended for 20 minutes until they bought a new certified sealed box." 16:54:53 even though you are allowed to vote "no" and still vote question 2 16:54:59 keegan: that is a bad idea if you like the current governor, it'll just guarantee that you'll get someone who his supporters dislike 16:55:30 also news about how big queues are in certain polling stations, usually the stations where people who vote from a town far from their home (but still within Hungary) are sent to vote 16:56:23 the last time we had one of these, the lieutenant governor ran as a replacement and got second place, and i guess they think that having him on the ballot boosted support for "yes" 16:56:47 so this time there are no credible replacements from the governor's own party and they are not endorsing anyone for question 2 16:56:55 keegan: it may be a good idea for the governor himself, but not for the voters 16:57:22 also there's an interpretation of the state constitution where we shouldn't even have question 2 and the lt gov should automatically become governor if the governor is recalled 16:57:47 (since that's what happens if the 
governor dies or resigns, and the constitution says that a recall should include an election for a replacement only "if appropriate") 16:57:59 so maybe if he loses they'll challenge the vote 16:58:03 it's all very strange and californian 16:58:09 this state really suffers from an excess of democracy 17:00:05 people ask that when they try to ask for directions <-- hm, in norwegian there's a different idiom for pre-asking that, "er du kjent her", which means more or less "do you know this place" with no necessary implication of whether you live there 17:00:26 ais523: re "sometimes I would put a purchase through as two separate transactions" => are the rules such that you have to solve hard computational problems to optimize it? 17:00:53 b_jonas: fortunately no, I think this is O(n log n) with the sort of special offers that were available 17:01:33 also if replaced the replacement will serve for only one year because we are due for a normal gubernatorial election next year 17:02:35 and the likely replacement is a far right talk radio guy who has no chance in a normal election 17:04:35 since it is a very left leaning state 17:06:31 hmm, I could try to make a third dairy-themed programming task from that. you want to follow your grandmother's recipe to the letter, and it calls for $n grams of kefir. the supermarket sells cups of kefir in several different sizes, with weights @a gram each. how much of each type do you buy? 17:07:04 -!- hanif has joined. 17:14:59 keegan: i saw scott alexander write about that mess, supposedly all the democratic candidates were quite terrible because of that strategy of no one important running. this sounds to me like it would really have benefitted from transferrable votes 17:15:31 at least they could then maybe get one of the less horrible republicans 17:16:46 like biden? oh wait, he's horrible 17:16:57 int-e: he's not republican hth 17:17:02 ... 
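The kefir task reads as the classic change-making problem. A dynamic-programming sketch, assuming the goal is to hit n grams exactly with the fewest cups (the original leaves the objective open); `buyKefir` and the cup sizes in the comments are made up:

```haskell
import Data.Array
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Fewest cups summing to exactly n grams; returns one count per cup
-- size, or Nothing when no exact combination exists.
buyKefir :: [Int] -> Int -> Maybe [Int]
buyKefir sizes n = memo ! n
  where
    -- memo ! m lazily caches the best solution for a target of m grams
    memo = listArray (0, n) (map solve [0 .. n])
    solve 0 = Just (map (const 0) sizes)
    solve m =
      case [ bump i sub
           | (i, w) <- zip [0 ..] sizes
           , w <= m
           , Just sub <- [memo ! (m - w)] ] of
        [] -> Nothing
        xs -> Just (minimumBy (comparing sum) xs)
    -- add one cup of size index i to a count vector
    bump i counts = [ if j == i then c + 1 else c
                    | (j, c) <- zip [0 ..] counts ]
```

For example, with (hypothetical) 175 g and 500 g cups and a recipe calling for 850 g, `buyKefir [175, 500] 850` gives `Just [2, 1]`: two small cups and one large one.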
17:19:44 also according to scott aaronson, he's better than a rutabaga, so still beats trump 17:20:34 but that was a different election and you'll just have to watch the mess 17:20:54 He's still somebody who would fit right into the Republican party. 17:21:02 Which was the point. 17:21:40 int-e: i didn't know that. but i haven't really paid attention to him. 17:23:03 (also technically i still don't know that) 17:23:08 He may have some radical ideas regarding gun control that would feel out of place. That's the only thing I can currently think of... eh I'm no expert either. 17:23:18 trump passed more gun control than obama 17:23:25 I should probably figure out the least evil for Germany. 17:23:37 "I like to take the guns early. Take the guns first, go through due process second." -- Donald Trump 17:23:54 of course don't mention that quote to any of the trump worshipping gun nuts 17:24:10 int-e: i have this thought lately - whenever you choose the lesser evil, someone has just played good cop/bad cop on you 17:25:02 indeed 17:25:25 the ruling class are all friends behind the scenes, regardless of party 17:25:28 Oh sure, elections are a carefully managed illusion of having /some/ impact on the political process. 17:25:30 and big corporations donate to both 17:25:42 It's hard not to be cynical about it. 17:25:59 meanwhile the most destructive and evil parts of american society (the military-industrial complex, mass incarceration, the drug war) have essentially bipartisan agreement 17:26:13 although the last one is falling apart a bit. 
but our prez still thinks legalizing weed is too dangerous 17:27:37 both parties kept us in afghanistan for 20 years just like both parties kept us in vietnam for about as long (if you count various covert and "assistance" operations going right back to the fall of the french colonial government) 17:28:04 i think people pay way too much attention to elections 17:28:33 like, you might as well vote because it's easy and gives you a little influence for free. but if you consider yourself "politically engaged" and yet most of that energy goes into arguing about who to vote for, then you're doing it wrong 17:28:49 especially arguing about who to vote for at the national level 17:29:09 local politics has so much more direct impact on people's lives, and is also a lot easier for an individual to influence 17:33:05 i'm so exhausted from electoral politics and being told that every election is an existential battle for the future and yet no matter who wins nothing really changes 17:34:24 even right now with democrats in "control" of the white house, the senate and the house, they can't really get anything done 17:34:48 and so if climate change or the rise of fascism really is an existential risk then the solution to that problem must lie outside electoral politics 17:35:22 but people shy away from this because their existential risk rhetoric is only intended to scare you into voting blue no matter who 17:35:51 "i'm so exhausted" -- this means it's working as designed :P 17:39:48 yeah 17:40:00 -!- APic has quit (Read error: Connection reset by peer). 17:40:24 -!- hanif has quit (Ping timeout: 276 seconds). 17:49:30 -!- hanif has joined. 17:56:39 oerjan: '"er du kjent her"' > curious what does this mean literally? 'are you known here' doesn't make sense 17:57:07 -!- APic has joined. 18:12:07 -!- Sgeo has joined. 
18:12:42 hanif: that is the literal word-for-word translation, but "kjent" works weirdly, as if it's ambiguous whether it's active or passive 18:13:56 "kjent mann" - a man who is known, "kjentmann" - a man who knows (the area) 18:15:21 hm 18:16:11 Hmm, english doesn't have a "know"-derived word for someone knowledgeable, does it... 18:16:44 . o O ( A knowledger. <-- well we can always try to make something up ) 18:16:54 https://en.wiktionary.org/wiki/kjent#Norwegian_Bokmål has both meanings 18:17:05 "knowledge" is already a crutch though. 18:17:19 int-e: knower. 18:17:24 I guess. 18:17:44 German has "bekannt" for "known", "Kenner" for someone who knows. 18:18:00 it doesn't speak to the quantity of knowledge but knower is pretty close. 18:18:13 imode: Yes, that makes more sense grammatically. It's still not a word, unfortunately. 18:18:22 "kjenner" is also norwegian, but it's a noun while "kjent" is an adjective (or participle) 18:18:25 (At least not one I'm aware of.) 18:18:34 huh? 18:18:36 knower is a word. 18:18:57 "bekjent" exists and means "acquaintance" :P 18:18:59 Never heard it. 18:19:19 oerjan: Oh we have that too, "Bekannter". 18:19:21 considering you were asking for it, I'm not surprised. :P 18:19:32 oerjan: Just with a bit more grammar, I guess. 18:19:35 if you didn't know it, you wouldn't ask the question! 18:20:40 also in other cases en:know = no:vite (~de:wissen) 18:20:49 it's apparently an agent noun of `know`. interesting, I wonder what other kinds of agent nouns exist for common verbs that don't "point" trivially to them. 18:20:59 imode: I've found this, https://www.urbandictionary.com/define.php?term=knower ...no dictionary entries. 18:21:04 huh? 18:21:15 https://www.thefreedictionary.com/knower 18:21:25 collins english dictionary. 18:21:40 it's also apparently a music duo. 18:22:20 and in wiktionary https://en.wiktionary.org/wiki/knower 18:22:44 does an agent noun exist for every verb, then? 
18:22:57 seems grammatically correct to just add "er" to every verb. 18:23:07 or "or". 18:23:15 Never heard it. <-- it's weird how in english words that make perfect sense can sometimes not exist for no good reason. 18:23:19 maybe just transitive verbs? 18:23:29 having trouble coming up with a counterexample. 18:24:15 huh. 18:24:26 yeah you could form an agent noun out of any verb. 18:24:30 that's neat. 18:26:41 snow? 18:26:56 'snow business of yours 18:27:05 rain too 18:27:08 snower. rainer. 18:27:12 although wiktionary has an entry for snower, apparently it's for another verb sense i wasn't familiar with 18:27:29 apparently snower is a DBZ character so I defer to toriyama. 18:27:49 https://en.wiktionary.org/wiki/snower 18:28:10 https://en.wiktionary.org/wiki/rainer <-- oh this one actually makes sense but I've never used it.. I think I might. 18:28:19 a showerhead is technically a rainer! 18:28:43 a cloud is technically a rainer 18:28:55 they do a damn good job. 18:29:43 anyway, know-it-all or expert are more common terms :P 18:30:42 those kind of denote the quantity of knowledge/status of the user more than "this dude knows, he is a knower". 18:30:42 curiously, i learned that in ancient greek, the verb for snow (χιονίζω) could be used actively - referring to a god 18:32:51 snow is rather powerful 18:34:02 Have I ever mentioned http://www.snowbynight.com/pages/ch1/pg1.php I wonder... (it's a story comic, more romantic than funny) 18:34:43 int-e: the snow wasn't the god, the god was throwing the snow 18:35:49 avalanches are pretty impressive too 18:35:56 i too have thrown snow :P 18:36:34 don't get me wrong, it's cool 18:36:56 it's just pretty common to associate forces of nature with gods :) 18:40:22 `learn hanif may be a god, or maybe they're just snowing us. 18:40:26 Learned 'hanif': hanif may be a god, or maybe they're just snowing us. 18:41:01 i got a pun for the wisdom, it's all good. 
18:42:43 ironic also because 'ḥanif' (in arabic) means one inclined to monotheism 18:43:12 tricky. 18:45:25 -!- ais523 has quit (Quit: quit). 18:50:05 -!- APic has quit (Read error: Connection reset by peer). 18:52:17 `? oerjan 18:52:18 Your omnidryad saddle principal ideal "Darth Ook" oerjan the shifty eldrazi grinch is a punctual expert in minor compaction. Also a Groadep who minces Roald Dahl. He could never remember the word "amortized" so he put it here for convenience. His arkup-nemesis is mediawiki's default diff. He twice punned without noticing it. 18:52:25 That is way too many modifiers. 18:53:04 istr you added most of them hth 18:53:44 then again, i'm perpetually confused. 18:55:56 -!- APic has joined. 18:55:57 . o O ( `slwd oerjan//s/ / highly adjectivated/ ) 18:56:56 -!- Koen_ has quit (Remote host closed the connection). 18:58:55 is there a know-worthy non-default diff in wikipedia? 18:59:42 -!- Koen_ has joined. 19:07:00 int-e: i use wikEdDiff 19:09:27 which is the only one i know that is listed in preferences 19:10:08 and is usually far clearer than the default, but sometimes locks up on big changes 19:11:46 (i mean, the default cannot even handle insertion of a blank line before a paragraph with a few changes sensibly...) 19:15:00 -!- APic has quit (Read error: Connection reset by peer). 19:15:18 -!- hanif has quit (Ping timeout: 276 seconds). 19:15:19 hm, i vaguely recall wikEdDiff or something similar _is_ the default if you use the visual editor rather than editing markup 19:21:40 -!- hanif has joined. 19:26:15 -!- hanif has quit (Client Quit). 19:32:13 -!- APic has joined. 19:49:36 so visual editor has some worth after all 19:53:55 int-e: oerjan: I’m still confused why there isn’t a blame feature (like in “git blame”) for at least MediaWiki (but I think many other popular wiki engines lack that too?) 
19:54:48 arseniiv: it may actually have to do with the sheer amount of processing that would require; it is a website after all 19:55:14 but it's probably also very messy... because edits are often very small 19:55:33 searching for who made something foolish in an article is usually a no go for me. I tried it several times and I think every time I confused the author of the change with someone right before or after 19:55:47 ('blame' needs diffs through the entire history at once) 19:56:12 arseniiv: bisection works well enough, it's just tedious 19:56:24 very tedious! 19:56:42 like, it still could be automated to some degree 19:57:04 I wouldn't be surprised if there were browser add-ons for that 19:57:32 but I'm a very casual wiki history investigator 19:57:42 I do it maybe once every two months 19:58:32 though couldn't a source be tagged with which segment is by whom (predicated on the diff used being a wise one) 19:59:20 I almost never do that at all, but each time curiosity wins… ugh 20:02:30 there could be an option to erase all tags when an article gets a stable version checked by someone omnireadent 20:02:59 so there wouldn't be too much garbage because of blame-tagging 20:07:47 -!- Everything has joined. 20:09:01 -!- arseniiv has quit (Quit: gone too far). 20:16:42 so visual editor has some worth after all <-- you can enable wikEdDiff separately in the preferences 20:17:09 -!- riv has quit (Quit: Leaving). 20:20:05 -!- oerjan has quit (Quit: Nite). 20:38:10 -!- arseniiv has joined. 20:39:37 -!- Koen_ has quit (Quit: Leaving...). 20:41:48 fungot: what's a good chunk size for file transfers these days? 20:41:49 int-e: but you get what i was worrying whether or not that qualifies as " invent" 20:52:44 -!- Lord_of_Life_ has joined. 20:54:40 -!- Lord_of_Life has quit (Ping timeout: 240 seconds). 20:55:20 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 20:58:23 fungot: Can't you just once answer a question? 
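The bisection discussed above is easy to automate once the search predicate is monotone over a chronological list of revision texts ("once the offending text appears, it stays"); `firstBad` is a made-up name for this sketch, and fetching revisions from a real wiki API is deliberately left out:

```haskell
-- Binary search for the earliest element of a chronologically sorted
-- list satisfying a monotone predicate, e.g.
--   firstBad (offendingSentence `isInfixOf`) revisionTexts
firstBad :: (a -> Bool) -> [a] -> Maybe a
firstBad isBad revs = go 0 (length revs - 1) Nothing
  where
    go lo hi best
      | lo > hi             = best
      | isBad (revs !! mid) = go lo (mid - 1) (Just (revs !! mid))
      | otherwise           = go (mid + 1) hi best
      where mid = (lo + hi) `div` 2
```

This inspects only about log2(history) revisions instead of every diff; the list indexing is O(n) per step, which is fine for a sketch.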
20:58:24 fizzie: ( a b)) work and i'm not particularly keen on. 21:01:28 fungot: Please respond with gibberish. TIA. 21:01:28 int-e: no amount of my expenditure... being a pita. i need to beat fnord 3. i just am pulling an fnord 21:01:53 <3 lowering standards 21:04:07 I wonder if fnord 3 is a game. 21:05:13 It would seem to have been Onimusha 3. 21:05:25 . o O ( It's the first in a series. The title was chosen in anticipation of the eventual prequels. ) 21:05:48 From #scheme, which is usually the most on-topic of the three channels the `irc` style is composed of. 21:05:59 Let's pretend I wrote that before you matched it with the logs. 22:46:40 -!- arseniiv has quit (Ping timeout: 240 seconds). 23:10:36 -!- tech_exorcist has quit (Quit: Goodbye). 23:17:49 int-e: can you be more specific? what chunk? chunk in an IP packet? chunk in a TCP stream or HTTP query? file on a removable storage device? 23:18:04 Man, the Glushkov construction for NFAs is so good. 23:18:50 -!- src has quit (Quit: Leaving). 23:19:03 -!- src has joined. 23:22:43 b_jonas: it's not really interesting 23:26:59 tbf I'm asking fungot just as stupid questions sometimes 23:26:59 b_jonas: so i have to make the sexpr encoder might have been voodoo code for protecting the software you've written? i don't know; i've never thought about how to make an image set... doing it manually but that doesn't necessarily minimize operations though 23:28:29 If a bot does something "manually", isn't it still automated in a sense? 23:31:41 fizzie: https://xkcd.com/2173/
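Since the Glushkov construction came up: it builds an NFA with one state per literal occurrence (plus an initial state) and no ε-transitions, from the nullable/first/last/follow sets. A compact sketch, simulating the position automaton directly rather than materialising it (the `Re` type and example regex are made up for illustration):

```haskell
import qualified Data.Set as S
import Control.Applicative ((<|>))

-- Regular expressions with each literal tagged by a unique position.
data Re = Eps | Lit Int Char | Cat Re Re | Alt Re Re | Star Re

nullable :: Re -> Bool
nullable Eps       = True
nullable (Lit _ _) = False
nullable (Cat a b) = nullable a && nullable b
nullable (Alt a b) = nullable a || nullable b
nullable (Star _)  = True

-- Positions that can begin / end a match.
firsts, lasts :: Re -> S.Set Int
firsts Eps       = S.empty
firsts (Lit i _) = S.singleton i
firsts (Cat a b) = firsts a `S.union` (if nullable a then firsts b else S.empty)
firsts (Alt a b) = firsts a `S.union` firsts b
firsts (Star a)  = firsts a
lasts Eps       = S.empty
lasts (Lit i _) = S.singleton i
lasts (Cat a b) = lasts b `S.union` (if nullable b then lasts a else S.empty)
lasts (Alt a b) = lasts a `S.union` lasts b
lasts (Star a)  = lasts a

-- Positions that may immediately follow position i.
follow :: Re -> Int -> S.Set Int
follow Eps _       = S.empty
follow (Lit _ _) _ = S.empty
follow (Alt a b) i = follow a i `S.union` follow b i
follow (Cat a b) i =
  follow a i `S.union` follow b i
    `S.union` (if i `S.member` lasts a then firsts b else S.empty)
follow (Star a) i =
  follow a i `S.union` (if i `S.member` lasts a then firsts a else S.empty)

-- The character at a given position, if any.
symAt :: Re -> Int -> Maybe Char
symAt (Lit j c) i = if i == j then Just c else Nothing
symAt (Cat a b) i = symAt a i <|> symAt b i
symAt (Alt a b) i = symAt a i <|> symAt b i
symAt (Star a) i  = symAt a i
symAt Eps _       = Nothing

-- Run the position automaton over an input string.
matches :: Re -> String -> Bool
matches re []     = nullable re
matches re (c:cs) = go (step (firsts re) c) cs
  where
    step ps ch = S.filter (\i -> symAt re i == Just ch) ps
    go cur []     = not (S.null (cur `S.intersection` lasts re))
    go cur (x:xs) =
      go (step (S.unions [follow re i | i <- S.toList cur]) x) xs

-- Example: (ab)*a with positions 1, 2, 3.
reABA :: Re
reABA = Cat (Star (Cat (Lit 1 'a') (Lit 2 'b'))) (Lit 3 'a')
```

Part of its appeal is that the resulting automaton has exactly one state per literal plus a start state, with no ε-transitions to eliminate afterwards.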