00:03:11 -!- augur_ has joined. 00:04:46 Should I read the 80386 manual to start learning assembly? 00:04:52 OSDev wiki seems to suggest that 00:05:00 http://wiki.osdev.org/Learning_80x86_Assembly 00:05:05 -!- augur has quit (Ping timeout: 240 seconds). 00:06:17 Sgeo_: maybe. but don't forget that cpus have changed a lot since the 386. they have out-of-order execution and speculative execution and rollback and crazy branch condition and target prediction and multi-level caches and vector registers (which are the new general registers) and 64-bit mode and all that stuff now. 00:06:53 oh, and register renaming too 00:06:56 not all english-speaking countries have useful libraries 00:07:02 and fast multiplication 00:08:44 Sgeo_: do you want to learn how to write assembly that works or assembly that runs fast? 00:08:58 I want to understand CPUs better. 00:09:01 assembly optimization is pretty crazy nowadays, mostly because processors no longer work anything like asm assumes they do 00:09:26 modern x86 basically recompiles the asm into an entirely different format on the fly 00:10:00 ais523: I personally care about the x86 cpu to learn how to write compiled code (C or C++ etc) that can run fast. it's rare that you actually have to write assembly, but knowing what the cpu does and caches and stuff helps when you write compiled code. 00:11:43 it's hard to know how to understand CPUs better nowadays 00:11:58 I guess looking at pathological cases and understanding why they're pathological can help 00:12:14 I suppose you could start with agner fog's optimization manual 00:13:03 (namely, the microarchitecture manual) 00:13:25 -!- augur_ has quit (Read error: Connection reset by peer). 00:14:16 ais523: you don't need to understand it perfectly, but knowing something about it can help 00:20:12 one other approach, which is a bit more relaxing, is to read mysticial's stack overflow answers 00:20:21 I would want to instead design the computer better, to not be so complicated and confusing like that. 00:22:19 Some of the complicatedness has really good excuses. Like, you know, the speed of light and how big atoms are. 00:24:13 Yes, but I think there are better ways to deal with that. 00:24:29 with the speed of light? 00:24:35 or with the size of atoms? 00:25:13 less speed of light and more speed of electric charge through a medium. :P 00:25:27 but I guess if you did optical computing.. 00:26:19 imode: that matters only in the short deep circuits, like within the cpu. for between the main ram and cpu, where there's only wires and no logic, the propagation speed is basically the speed of light. 00:27:01 You shouldn't put in the out-of-order execution, complicated caching, speculation/rollback, etc. Make any such stuff be done explicitly as part of the program instead. 00:27:06 then there are dumb complications, like how every x86 chip has three different types of floating point units 00:27:22 what? it's all subject to the speed of electric charge through a medium. transference of charge is NOT the speed of light. 00:28:03 (apparently gcc defaults to doing all floating point in sse now, if you use -ffast-math) 00:28:08 regardless of whether or not there's logic in the way. logic only adds switching delays, gate delays, etc. 00:28:49 zzo38: I don't think that's a good idea.
hardware guys tried that, but it turns out it only works if you assume the software guys can write magic compilers that can guess how cached each memory load actually is and the people who write the high-level code want to annotate their C code with lots of hints, 00:29:14 and even then the machine code will be very verbose and you can't cache it properly. 00:29:35 out-of-order execution and register renaming and speculative branch prediction work well. 00:29:58 the way to think about modern x86 machine code is as a compression scheme for what's actually run 00:30:03 and one that isn't very good, at that 00:30:24 I don't like it, as then you can't know what order it is in, and so on. MMIX has explicit branch prediction; you must specify whether you want a branch or not-branch to be faster. 00:30:28 the machine models probably should stay the same, but we should move to asynchronous circuits. 00:30:38 actually, the complexity of a modern core exists for a more fundamental reason 00:30:51 because memory is getting slower 00:30:53 Then use a better compression scheme, I suppose? 00:31:13 most of the time the bottleneck is either memory access when people write cache-unfriendly code or decoding time when people write cache-friendly code. it's less common that the bottleneck is mispredicted jumps, which is the only case where reducing the depth of the pipeline would actually help. 00:32:16 If I want caching I should explicitly write in the caching instead. 00:32:40 zzo38: tell that to stupid programmers who insist on using large arrays of 64-bit pointers everywhere when large arrays of 32-bit array indexes would work. 00:32:48 since memory is getting slower, there is nothing else for your doubling transistor counts to do other than to reorder more loads and stores or to cram more ways to use that memory bandwidth into the instruction decoder 00:32:57 zzo38: it's not "if I want caching". you almost always want caching. 00:33:42 I don't want to complicate it. You don't need such a huge number of transistors and such slow memory; make faster memory then. 00:34:04 helloily 00:34:34 Jafet: I have said this a few times, but what would IMO help a lot is if the cpu and OS people together found a way to increase the minimum page size from 4k, because then we could have more L1 cache, but it only works globally and some software assumes the page size is fixed, so it's really hard to do without breaking compatibility with everything we have. 00:35:40 zzo38: memory throughput is plenty fast, especially if you're willing to buy expensive hardware. you can't have faster memory in latency though, because the main memory is between 0.1 and 0.3 meters from the cpu physically, so the signal takes several clock cycles to propagate 00:35:52 what does the page size have to do with it? 00:36:20 that's why we need lots of on-board caches, in three levels (L3 for the whole chip, L2 per core or per two cores, and L1 with very low latency really close to the execution units) 00:37:39 Jafet: basically the L1 cache wants to have very low latency, so it has to guess which cache slot holds your memory before it can look up the physical address in the page table cache (aka TLB = translation lookaside buffer), then verify that the address matches what the cache entry caches. 00:38:41 So the L1 cache can only use the low 14 bits of the address, and it practically can't have more than 8 entries for any one address, because then managing it would be too slow.
So L1 caches have been topped out at 32 kilobytes (8 page sizes) of data cache and 32 kilobytes of code cache per core for half a decade now. 00:38:48 -!- sebbu has quit (Ping timeout: 240 seconds). 00:38:49 I still think there is a way to do it though, by having separate addressing for the cache 00:39:18 And put the memory in the processor itself, and also microcode, so that you can program your own microcode too, to improve the speed. 00:39:38 All cpus have that much L1 cache, but none can have more. To fix this, either you need larger page sizes, or some even more incompatible change. 00:40:14 I am not so concerned if C code will run extremely fast, since you can write it in assembly language if you want to code specifically for this computer. 00:40:20 zzo38: the memory IS practically in the processor. processors have like 380 megabytes of L3 cache, and it keeps increasing. you get memory outside of the cpu too because most people want even more memory than that. 00:41:32 Yes, but you could have separate addressing for them. 00:41:56 yeah. sliding memory windows.. 00:42:07 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”). 00:42:14 you could run a linux system off of cache alone these days. 00:42:17 zzo38: why would that be worth it? the program can't tell in advance which memory will be in L3, because that depends a lot on the multitasking, and spilling from L3 to memory doesn't really add much overhead anyway. 00:42:39 note that for the machines that have 380 megabytes of L3, it takes almost as long to access as the dram 00:42:52 imode: I think motherboards don't really support that, but that doesn't matter, because slow RAM chips are cheap anyway, so you can just put some in. 00:42:56 Design it so that the program does know in advance, because only what the program puts there will be there. 00:43:02 at least for chipsets that I know of 00:43:16 Jafet: no, not really. not in latency. It's still five times closer physically than the main memory. 00:43:25 hi wob_jonas 00:43:29 hi shachaf 00:43:37 I ate lángos the other day. 00:43:45 shachaf: go on 00:43:52 That's pretty much it. 00:43:59 Do you eat it sometimes? 00:44:07 You may even add parallel memory transfer if you want to, and then you can only address the cache, and not the external memory. 00:44:09 no, I don't much like it 00:46:40 I don't like food that's soggy with fat. That mostly comes up with ways to prepare meat, but lángos is an example too. 00:47:18 It was very deep-fried. 00:48:09 Exactly. 00:48:14 Doesn't change much. 00:48:30 Is there other Hungarian food I should try? 00:48:34 here's a diagram claiming 40ns for the SB-E interconnect https://mechanical-sympathy.blogspot.de/2013/02/cpu-cache-flushing-fallacy.html 00:49:01 although I'm not sure if I should believe 65ns dram 00:49:06 shachaf: I can't predict what you'd like. 00:49:16 are you in Hungary or somewhere close? 00:49:47 or did you just eat lángos in Norway? I've seen such sold in Sweden, though I can't tell how authentic they are. 00:49:50 No. 00:49:54 It was in Oakland, CA. 00:50:02 I don't know how authentic it was, or how to measure that. 00:51:32 relatedly, the mill architecture videos were p.g. 00:52:25 Did you watch them all? 00:52:25 In any case, even if better architecture is possible, I care about x86_64 only, because it has the best support: most of the powerful computers have it, including anything I'll buy, and there are lots of good tools like optimizing compilers and good documentation.
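[editor's note: the L1 sizing argument above says a virtually-indexed cache can hold at most page_size * associativity bytes, e.g. 4 KB * 8 ways = 32 KB. A minimal sketch for checking this on a live machine, assuming Linux with glibc (the _SC_LEVEL1_* sysconf parameters are a glibc extension and may report 0 on some systems):]

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* POSIX: the MMU page size, which bounds the untranslated index bits */
        long page = sysconf(_SC_PAGESIZE);
        /* glibc extensions: L1 data cache size and associativity */
        long l1   = sysconf(_SC_LEVEL1_DCACHE_SIZE);
        long ways = sysconf(_SC_LEVEL1_DCACHE_ASSOC);

        printf("page size: %ld bytes\n", page);
        printf("L1d cache: %ld bytes, %ld-way\n", l1, ways);
        if (page > 0 && ways > 0)
            /* a virtually-indexed, physically-tagged L1 cannot exceed this
               without aliasing problems */
            printf("VIPT limit: %ld bytes\n", page * ways);
        return 0;
    }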
00:52:33 You should go work for the Mill folks. 00:54:02 No, x86_64 is too complicated and too confusing; MMIX is better, and the original x86 is also good, and also MIX, and also 6502. 00:54:13 I did watch them all, but the plot twist in 10 was perhaps worth it 00:54:36 zzo38: Do you like the Mill? 00:55:03 Jafet: 10? 00:55:09 I looked but was unable to find the proper documentation of it 00:55:26 zzo38: I think the best documentation is in video form unfortunately. 00:55:30 Did you watch the videos? 00:55:33 No 00:56:16 There are a lot of people that start projects about fancy new cpu architectures, but actually making good optimized and well-tested cpu hardware and supporting software like optimizing compilers is pretty hard, so I don't think those projects make any sense. 00:56:39 Only a big company like Intel or AMD has the resources to be competitive in it. 00:56:41 You can write the program in assembly language, though. 00:57:12 huh, video 10 (“compiler”) is not the tenth on the website 00:57:25 And while it's easy to criticise Intel, and they do make mistakes, they are actually doing pretty good work overall IMO. 00:57:28 well, it was that one 00:57:31 Hmm, what's the twist in that video? 00:58:22 the true nature of the mill is revealed in the end, in the q&a session I think 00:58:38 zzo38: only if you don't care about all the existing software written in C and other compiled languages that you'll want to run and want them to perform fast, such as the linux kernel itself 00:58:59 Which true nature? I watched the video but it was a while ago. 01:00:04 and don't even try to say you'll just have two different cpus side by side, because it turns out, if you want to do thousands of operating system calls per second and low latency networking and stuff like that, that just doesn't work. 01:00:25 Of course I will likely want the programs to run, but I can do without them going fast if making them fast means making a lot of confusion and complication, and instead write assembly language programs when wanting to make a faster program specifically for this computer. This is always the case anyways; you will want to write assembly language programs hand-optimized for space and speed, taking advantage of the specific features of this computer. 01:00:49 For example, you might use different kinds of data structures for the version of the program for different computers, too. 01:01:17 zzo38: for some programs, you can get away with running slow. but you won't rewrite the linux kernel and all the hardware drivers. there's a lot of work going into that project. 01:01:19 Or one version might omit some check that is needed on another implementation. Or whatever. 01:01:23 shachaf: something about how the belt is really just a better register map 01:01:32 but I don't remember clearly either 01:01:40 Jafet: Ah, I vaguely remember something like that. 01:01:47 Different computers will have different interfacing with hardware anyways. 01:02:02 I talked to someone who worked at Intel about it and they were a bit dubious about the hardware implementation of it. 01:02:09 But I don't really know much about it. 01:02:13 zzo38: Do you like the Mill's instruction encoding? 01:02:20 I don't know how it works 01:02:22 zzo38: There are two instruction pointers, one moving forward and the other moving backward.
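[editor's note: a toy illustration of that two-pointer idea. The bundle layout below is invented for the example and is not the Mill's actual encoding; the point is only that two decoders can consume one bundle from opposite ends in parallel:]

    #include <stdio.h>

    /* invented bundle format: forward-stream ops packed at the front,
       backward-stream ops packed at the back */
    static void decode_bundle(const int *ops, int n)
    {
        int fwd = 0, rev = n - 1;
        while (fwd <= rev) {
            printf("decoder A: op %d\n", ops[fwd++]);       /* walks forward  */
            if (fwd <= rev)
                printf("decoder B: op %d\n", ops[rev--]);   /* walks backward */
        }
    }

    int main(void)
    {
        int bundle[] = {1, 2, 3, 4, 5, 6};
        decode_bundle(bundle, 6);
        return 0;
    }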
01:03:48 My own idea is a bit different: there is a microcode memory (with RAM and ROM) that you can load VLIW microcode into, and which uses entirely different addressing (and even a different number of bits) from the external memory; that is one thing it does. There are others too. 01:04:15 zzo38: Do you like the Mill's belt? 01:04:16 Programs can load their own self-modifying microcode. 01:04:56 Is self-modifying code worth the trouble? 01:05:08 shachaf: Again, I don't know how it works; you will need to explain it if I am to answer such questions 01:05:26 I think self-modifying code is worth the trouble; I do not see why not. 01:05:53 zzo38: The belt is like a stack, except it's a queue of some bounded size. 01:05:56 As long as the CPU execution is defined precisely and unambiguously, then you have compatibility. 01:06:32 Instructions push their results onto the belt, and refer to belt positions by index (i.e. how recently a value was pushed). 01:07:11 When values fall off the end of the belt, they disappear. 01:07:31 Actually I have done stuff like that before, so yes I do understand. 01:08:52 people try that, but IMO it doesn't work. ostensibly it saves a few bits in the encoding, but nothing else (the register renamer isn't actually a bottleneck EVER, register reads and writes sometimes are but a belt doesn't help) and your code gets much harder to write when you need to store registers for longer or need conditions or loops. 01:09:13 it does sound like a good idea, I've thought about it, but I don't think it works. 01:10:20 as I understand it, the main point of the belt is that you get to use 512 registers instead of 16 01:10:54 how does that work? don't you still need to encode all the input registers in the code explicitly, even if you can omit the output register most of the time? 01:11:48 we can have a plain large register array (like the one with 32 vector registers in future x86) or a large MMIX-like register stack if we just want more registers 01:12:17 (and that's 32 vector registers, plus the same 16 index registers you've always had) 01:12:17 well, the output registers are always fixed, so leaving that out does free up some bits 01:12:30 -!- augur has joined. 01:12:44 (and if you want even more, you can save index registers into fields of vector registers, and also efficiently use the stack) 01:12:48 but the videos didn't go into any detail about the instruction encoding 01:12:55 wob_jonas: golfing languages have experimented with different registerish things quite a bit 01:12:59 I just don't believe it saves much 01:13:12 I think the optimum is to have some way to have very cheap, short-lived local values but also separate storage for longer-lived values 01:13:23 a Mill-like belt is good at the former but not the latter 01:13:59 ais523: after a limit, golfing doesn't help. modern x86 extensions actually are somewhat less golfed than old x86 used to be, because being able to decode the instructions quickly is more important. so they actually have a lot of unused bits in instructions in EVEX encoding. 01:14:28 wob_jonas: well, golfing helps in that it reduces cache pressure 01:14:31 Sure, compact code still matters, but extreme golfing isn't always good.
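[editor's note: a minimal model of the belt as described above: a bounded queue where each result is pushed at the front, operands are named by how recently they were pushed, and old values silently fall off. All names are invented for illustration:]

    #include <stdio.h>

    #define BELT_LEN 8

    static int belt[BELT_LEN];
    static int head = 0;            /* slot of the most recently pushed value */

    /* push a result; the oldest value falls off the end and disappears */
    static void belt_push(int v)
    {
        head = (head + 1) % BELT_LEN;
        belt[head] = v;
    }

    /* belt_get(0) is the newest value, belt_get(1) the one before it, ... */
    static int belt_get(int pos)
    {
        return belt[(head - pos + BELT_LEN) % BELT_LEN];
    }

    int main(void)
    {
        belt_push(3);                           /* belt: b0=3        */
        belt_push(4);                           /* belt: b0=4, b1=3  */
        belt_push(belt_get(0) + belt_get(1));   /* "add b0, b1"      */
        printf("%d\n", belt_get(0));            /* prints 7          */
        return 0;
    }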
01:14:36 I prefer how MMIX does it actually, although I can also think of a few other ideas about how to do it 01:15:02 and the thing about a highly golfed instruction set is that there's more scope to improve its performance as processors get better 01:15:09 and you lose a lot in expressibility 01:15:14 REX encoding annoys me, it's so verbose, and yet you have to use it for basically everything on x86_64 01:15:30 -!- doesthiswork has joined. 01:16:53 ais523: nah, the double prefixes (0xF0 and one more prefix byte) for old SSE code are much worse, but it was necessary for easy decoding 01:17:15 I still think modern x86 is too messy, and modern ARM is also too messy. 01:17:17 but it got somewhat better with the later extensions (AVX code and AVX512 code) 01:17:55 zzo38: sure, there's some historical cruft, but a lot of it is pushed out to where it doesn't actually impact the performance if you don't use it 01:18:34 I am not talking about historical stuff, but about the new stuff. 01:18:42 the mill speaker was going on about small loops very often, so I don't think his goal was to improve instruction set density 01:20:15 zzo38: some of it is messy, but they are getting better in the design than they used to be. AVX512 actually manages to avoid the AVX stupidity (where vector registers got split to 16 bit and 32 bit) which was only done to make it easier for operating systems 01:21:18 you only affect whole registers now, which by the way means you can't have callee-saved registers, because they couldn't save the upper part if the register is ever extended to 1024 bits, but that ship has sailed with AVX2 anyway already 01:21:22 he had a plan to pipeline loops with nullable values, though (which can be implemented in a conventional CPU) 01:21:23 or with AVX 01:21:38 the vector registers all have to be scratch except for the lower 128 bits of four of them 01:22:03 Jafet: we have efficient conditional move instructions for that 01:23:05 it took us quite a while, they should have added them long ago, so sadly you still have to feature test for them on x86_64, but still, they are there in all currently used cpus 01:23:09 zzo38: even the original 8086 was pretty messy 01:23:17 x86 must be the worst popular asm 01:23:34 What's a good popular asm? 01:23:41 6502. 01:23:47 ais523: yes, and it already had stupid historic cruft for marketing reasons 01:24:02 6502 isn't popular but it is fairly good for the scale of processor it's on 01:24:27 ais523: Yes, but still not quite as messy as the modern kind 01:24:29 you still have to move a value, though, and it could cause an instruction in the pipeline to trap? 01:25:02 6502's pipeline is /very/ short 01:25:06 I do like 6502 though, as well as MMIX 01:25:33 6502 was good when it was new. but it's just not modern. 01:26:08 ARM then. AVR. MIPS. choose one, they all suck in many ways. :P 01:26:36 That is why I prefer MMIX over ARM, AVR, MIPS 01:27:07 I think 8080 seems nice. 01:27:14 What's all the hype about 8086? 01:27:28 historic reasons. 01:30:22 tswe_tt: historical 8086 isn't important, except historically, as in it has a lot of successors that have inherited some decisions from it that made sense at the time but are hard to support now and take a ton of time to get rid of. modern x86_64 matters because it's the best supported high-performance cpu there is on the market now, with good hardware and software, both well-tested and high performance. 01:36:30 `? charizard 01:36:56 wait, where's HackEgo. fungot, what did you do with HackEgo?
01:36:56 wob_jonas: yes i think everybody's just afraid i think now 01:36:59 when people say that arm's instruction set is good, are they referring to a subset that does not include thumb, thumb2, jazelle, neon, virtualization, or mov pc 01:38:23 all of the damn embeddings I've seen for binary trees have been in hypercubes, and they all waste one bit of space. 01:38:53 one bit per tree? 01:38:56 `? bulbasaur 01:39:07 one bit per path to a node. 01:40:02 it seems that I can't escape paying one bit.. 01:40:25 Can you prove it? 01:40:38 certainly trying to. 01:41:19 you can encode any path from the root of an N level full binary tree to any of its leaves in N bits, but you can't encode a partial path. 01:41:44 unless I'm clinically insane, you will always have leftover bits that stand for a left traversal if left unchanged. 01:44:53 ah, the wonderful world of small-space information-theoretic lower bounds 01:45:11 you have to waste _at least one bit_ to mark the start of a valid sequence of branches. 01:45:45 doesn't an n-level binary tree have 2^n-1 nodes? 01:46:59 if it's balanced 01:47:20 hm. 01:50:57 Jafet: I'm trying to avoid integer arithmetic. calculating parent paths is not beneficial if you're dealing with paths through 1024-level binary trees or larger. 01:51:11 this is actually faster. 01:51:45 -!- augur has quit (Remote host closed the connection). 01:52:28 the problem reduces to "how do I store the length of a bit vector without storing the length of a bit vector." :P 01:55:12 -!- augur has joined. 01:55:29 -!- boily has quit (Quit: ARTICULATED CHICKEN). 01:59:02 I believe that computers do integer arithmetic in binary 01:59:49 that they do. but I would rather not implement arbitrarily large binary numbers just to store large paths. 02:00:08 now, arbitrarily large bitvectors on the other hand, that I can get behind. 02:00:09 But for what kind of computer? 02:01:50 Let's see what the weather forecast says. Does the weather cool down after this rain and storm and cold front? 02:02:32 if you store all paths with the same number of bits, then you do not need any extra bits 02:03:43 yeah, you do. if you want to store a path in a byte, you're going to store lefts as 0's, and rights as 1's. the path 101 is really 10100000, which is not the path you intended. 02:04:36 A bit, but not enough. It will warm up again. Damn. 02:04:59 -!- wob_jonas has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client). 02:05:43 hmm, it seemed like there was an arbitrage opportunity for a bit 02:05:56 Do you like the two new loader programs for MIX that I have designed? (Both need only one card, and actually the last five character positions are unused. One is for loading programs with a specific byte size, while the other one is independent of byte size) 02:06:51 " N O6 A O4 H N ENX E K BU I OA H A. PA N D LB E AEU ABG G 9 " 02:07:20 Jafet: here's the solution to that. pad the path with however many unused bits there are - 1, then pad it with a 0. so the path 101 becomes 11110101. you march forward through the bit vector and stop after the first 0. 02:07:56 the downside is that now your paths must _always_ start with a 0. you could fenangle it to work out an extra root node from a path, but uh.. yeah. 02:10:02 @wn fenangle 02:10:02 No match for "fenangle". 02:10:13 @wn finagle 02:10:14 *** "finagle" wn "WordNet (r) 3.0 (2006)" 02:10:14 finagle 02:10:14 v 1: achieve something by means of trickery or devious methods 02:10:14 [syn: {wangle}, {finagle}, {manage}] 02:10:22 huh, til. 
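[editor's note: a sketch of imode's padding scheme from above: leading 1s, a single 0 as a start marker, then the path, so the 3-bit path 101 encodes as 11110101 and the reader recovers the length by marching past the 1s. Function names are invented, and the sketch assumes paths of at most 7 bits so the marker always fits:]

    #include <stdint.h>
    #include <stdio.h>

    /* encode a path (0 = left, 1 = right, MSB-first) of len <= 7 bits:
       pad with 1s, add a single 0 marker, then the path itself */
    static uint8_t encode_path(unsigned path, int len)
    {
        return (uint8_t)(0xFFu << (len + 1)) | (path & ((1u << len) - 1));
    }

    /* march from the high bit past the 1-padding; the bits after the
       first 0 are the path. returns the path length in bits. */
    static int decode_path(uint8_t byte, unsigned *path)
    {
        int pos = 7;
        while (pos >= 0 && ((byte >> pos) & 1))
            pos--;                      /* skip the padding 1s */
        *path = byte & ((1u << pos) - 1);
        return pos;
    }

    int main(void)
    {
        unsigned p;
        uint8_t b = encode_path(0x5, 3);                /* path 101 */
        int len = decode_path(b, &p);
        printf("%02X -> len=%d path=%X\n", b, len, p);  /* F5 -> len=3 path=5 */
        return 0;
    }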
02:10:33 never even knew it was.. really a word. 02:11:28 it doesn't seem tricky or devious if you just number all the nodes in level order starting from 00000001 02:12:11 right. so, with that numbering, give me the path to that node. 02:12:36 then 101$ is 00001101 02:13:39 because the real idea here is storing sparse trees, where you give me a node and I assume that every node along the path is a valid one. 02:14:34 so when I say "well does node X exist?", all I have to do is run over all the stored paths and check whether a partial match exists. 02:17:17 well, storing paths in trees is a generally poor way to store trees 02:17:51 what alternative would you give me? I just need to store the structure of the tree and query whether a given node exists. 02:19:44 I might offer a parenthetical (even a balanced one) 02:20:00 mm. sure. that's a good way to store static trees. 02:20:12 hell, I have an encoding that saves a bit. 02:20:26 but dynamic trees. 02:22:46 most trees look static to me; they sway a bit sometimes 02:22:52 lmao. 02:23:29 -!- btiffin has joined. 02:23:39 are you one of those people who graft branches, or turn them sideways? 02:24:30 I would really like to not rewrite a given bit string representing a tree every time I need to insert a node. 02:24:34 actually, I'm not sure I've seen a paper that implements tree rotations 02:24:53 they only tend to cover indels, and maybe split/merge 02:24:58 yeah. 02:25:09 I guess rotations reduce to split/merge 02:25:21 pretty much any operations imply rewriting the whole bit string. 02:25:54 I think navarro had a paper that demonstrated logarithmic indels, splits and merges 02:26:04 -!- augur has quit (Remote host closed the connection). 02:26:23 I'd rather take my chances with early matches and additive updates. the benefit to my method is that no matter how the paths arrive, the tree is final. 02:26:44 meaning I could send over the paths 000, 010, 110, 101 in any order and the tree would still be the same. 02:28:30 so I guess... I'm willing to pay the extra storage. 02:28:47 -!- contrapumpkin has quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…). 02:29:41 well, “demonstrates” might have been the wrong word, as this data structure has probably never been actually implemented 02:29:47 https://arxiv.org/abs/0905.0768 02:30:07 oh yeah, I saw this. 02:35:23 -!- augur has joined. 02:36:00 I don't think any of these bit strings are meant to be stored as bit strings 02:36:27 how so? 02:37:12 they are stored as trees, with nodes near the edge compressed into short strings to reduce size 02:37:38 mm. I'd store them as bit strings. efficient traversal, but inefficient updates. :P 02:38:06 hell, efficient storage too. 02:38:23 as long as you compress groups of about (log n)^k nodes at a time, the remaining n/(log n)^k pointers for the tree no longer prevent you from having the word “succinct” in your paper title 02:39:22 so it's asymptotically worthwhile to interpret trees as bitstrings then store them in trees of bitstrings 02:39:59 perhaps xzibit would have been a good string algorithms researcher 02:40:44 is there a word for the relationship expressed by that meme that's more precise than just "recursion"?
02:41:13 (for reference, the "reference implementation" of that meme is "I put a car in your car so you can drive while you drive") 02:42:19 I'm not sure, but such a word could also describe the work that someone once did benchmarking nested self-interpreters 02:43:04 oh, the eigenratio website still exists 02:45:30 I've been thinking about nested self-interpreters quite a bit recently 02:45:50 trying to work out what sort of language would naturally have an eigenratio of 1 for most obvious ways of writing a self-interpreter 02:46:05 also continuations 02:52:21 if you take a recursive unit cell grid in conway's life and run it with hashlife you should technically get an eigenratio of 1 02:53:26 I'd test this, but nesting a unit cell sounds like something I'd need to generate with a script and I don't care enough 02:55:27 -!- augur has quit (Remote host closed the connection). 03:00:43 Why would you need to nest it with a script? 03:01:28 If you have some way to describe macrocells, you should just be able to do a simple substitution or something for the next level. 03:02:05 well, I would do such a substitution with a script 03:02:19 gn 03:02:21 Well, OK. 03:02:21 I'm not sure if macrocell identifiers are required to be increasing 03:02:24 -!- ATMunn has quit (Remote host closed the connection). 03:02:28 that would make it more annoying 03:02:42 Jafet: storing trees, even partial trees, via pointers is not succinct, afaict. 03:02:45 Required by what? 03:03:50 -!- augur has joined. 03:04:39 apparently the nodes are numbered implicitly, so I'd have to change all the numbers when combining macrocell files 03:05:09 Is there a standard format for describing hashlife states? 03:05:40 yes, I think that format is called macrocell 03:06:03 well, not if you also want the cached results 03:08:58 Jafet: ah right, hashlife is a good example here 03:09:15 it seems that nobody wants the cached results, though, not even golly, which clears the cache every GC cycle (even for results that didn't get GC'd!) 03:11:00 ais523: now if you had a simple functional language that, unlike a really overrated CA from 1970, could express the notion of a memoizing implementation of itself 03:11:29 it's pretty easy if you're OK with programs like if (false) while (true); not terminating 03:11:36 but that's a pretty big restriction 03:15:15 so they found the unit cell and hashlife but failed to see that the resulting eigenratio is 1 03:15:49 But the main point as far as this blog goes is that "Life" has a self-interpreter, and its eigenratio is exactly 5760! — http://eigenratios.blogspot.de/2007/09/self-interpreter-for-conways-game-of.html 03:17:49 a self-interpreter that works from finitely many starting cells would be rather more impressive :-) 03:18:04 also should be possible, and might even be possible with the same ratio 03:19:24 well, you only need to invent a fast enough breeder that lays more unit cells 03:19:39 it would probably have a larger period than 5760 though 03:22:18 if it fits within 11520 it would be fine 03:22:47 the speed of light might be the absolute speed limit in Life, but if starting from a finitely large pattern, things can't escape the pattern boundary faster than c/2 in the long term 03:26:15 Can you have a non-empty background for a finite pattern? 03:26:40 Some sort of infinite pattern that lets you communicate information more quickly. 03:27:01 I guess you would want all your patterns to preserve it.
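[editor's note: Jafet's level-order numbering from the tree thread above is the usual implicit-heap trick: prefix the path bits with a leading 1, so path 101 becomes 00001101 and the number encodes its own length; parent and child moves are plain shifts. A minimal sketch:]

    #include <stdio.h>

    typedef unsigned long node;   /* 1 is the root; the leading 1 bit marks
                                     where the path starts */

    static node child(node n, int right) { return (n << 1) | (node)right; }
    static node parent(node n)           { return n >> 1; }

    /* is `anc` on the path from the root down to `n`? */
    static int on_path(node anc, node n)
    {
        while (n > anc)
            n >>= 1;
        return n == anc;
    }

    int main(void)
    {
        node n = child(child(child(1, 1), 0), 1);  /* path 101 */
        printf("%lu\n", n);                        /* 13 = 00001101 */
        printf("%d\n", on_path(child(1, 1), n));   /* 1: node 11 is an ancestor */
        printf("%lu\n", parent(n));                /* 6 = path 10 */
        return 0;
    }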
03:30:29 well, a breeder for unit cells would likely have a six-digit period 03:32:23 (or more than six, but the unit cell seems to be made of standard parts so a fast glider synthesis shouldn't be too hard) 03:33:43 imode: a succinct data structure is just one that has less than O(n) overhead 03:34:40 not O(n), o(n). 03:35:24 this generously includes trees with O(n/(log n)^(1+ε)) pointers of O(log n) bits each 03:35:33 again, that is not the lower bound. 03:35:54 ꙮ(n) 03:36:42 shachaf: in general, yes, but I'm thinking about the specific case of an only-dead-cells background 03:37:56 golly supports a toroidal grid, which could be expanded to support a periodic background 03:39:08 Presumably a periodic background is reasonably easy to implement in hashlife -- you just need to change the way you grow the grid. 03:39:40 yes, though having to pad it to powers of 2 would be annoying 03:47:53 ꙮ̃(n)? 03:58:44 -!- PattuX has quit (Quit: Connection closed for inactivity). 04:01:36 I don't think multiocular O is a common piece of computational order notation 04:04:38 creationists use it to denote information lower bounds — the eyes signify irreducible complexity 04:05:04 Nor is it a common character in Cyrillic manuscripts. 04:05:27 Creationists? Is that people who use ꙮ_CREAT? 04:07:10 ꙮ̃ is used when a log gets in the eye, or perhaps a 2-by-4 05:00:26 -!- btiffin has quit (Remote host closed the connection). 05:13:57 -!- olsner_ has changed nick to olsner. 05:15:03 Apparently some Java-based HTTP client interpreted "gopher://zzo38computer.org" as a relative URI, even though clearly by its syntax it isn't. 05:15:07 golly supports one periodic background, but that's only for b0s8 rules 05:15:33 What's b0s8? 05:15:38 and where the background switches from on to off every generation 05:15:44 the parity hack doesn't really count 05:16:05 yeah, that's just an edge case 05:16:09 http://golly.sourceforge.net/Help/Algorithms/QuickLife.html 05:16:22 I think that explains it better than I could here 05:26:44 -!- xkapastel has joined. 05:31:45 What does "eigenratio" mean here? 05:33:01 Oh, b,s means born,survive 05:34:36 oh, :/ thought that page said that 05:40:37 It does. 05:43:28 zzo38: "zzo38computer.org" is technically a relative domain name; the absolute version is "zzo38computer.org." 05:43:38 however, for some reason it became standard to write URLs without the trailing dot 05:44:35 ais523: OK, but it is still not a relative URI 05:44:44 Do you mean it interpreted it as ./gopher:/zzo38computer.org? 05:45:34 Yes, that is what it did, it looks like 05:48:45 -!- augur has quit (Remote host closed the connection). 05:50:54 huh, so it did 05:53:44 If I get a vanity TLD, can I put an MX record on it? 05:54:55 there's no technical restriction against that 05:55:12 there might or might not be a political one (e.g. ICANN only agreeing to sell you the TLD if you don't host anything on the TLD directly) 05:55:32 or, well, it's a known fact as to whether or not there's a political restriction, but not known by me 05:57:18 There was a URL shortener on a two-letter country TLD once. 05:57:21 But they took it down. 05:57:53 I bet lots of bad email regexps would reject an email address like that. 06:03:21 hmm https://serverfault.com/questions/154991/why-do-some-tld-have-an-mx-record-on-the-zone-root-e-g-ai 06:07:22 Aha, Jafet++ 06:10:09 I wonder if /bin/hostname should ship with a copy of this table 06:10:43 I guess that would only solve half the problem 06:12:39 Which table?
06:28:01 shachaf: a table of TLDs with strange DNS records 06:29:00 that would be hard to update 06:30:35 seems that ai. no longer has an MX record, though it still has A, NS, and a conspicuous lack of SOA 06:30:39 -!- erkin has joined. 06:44:08 -!- erkin has quit (Quit: Ouch! Got SIGABRT, dying...). 06:45:08 -!- newsham has quit (Ping timeout: 260 seconds). 06:49:58 Jafet: It looks like it has an MX record to me? 06:51:26 -!- FreeFull has quit. 06:51:47 Is .home a generic TLD? It would make a good email address for inquiries regarding distributed computing projects. 07:00:18 -!- erkin has joined. 07:03:49 -!- hakatashi has joined. 07:14:33 -!- newsham has joined. 07:17:27 -!- ais523 has quit (Ping timeout: 260 seconds). 07:34:53 -!- doesthiswork has quit (Quit: Leaving.). 07:49:19 -!- oerjan has joined. 07:59:10 -!- ybden has quit (Ping timeout: 240 seconds). 08:01:45 -!- ybden has joined. 08:12:57 -!- erkin has quit (Read error: Connection reset by peer). 08:13:29 -!- erkin has joined. 08:21:33 no (but homes and house are) 08:21:43 Wikipedia says "BT hubs use the top-level pseudo-domain home for local DNS resolution of routers, modems and gateways." 08:22:48 https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains 08:26:08 -!- hppavilion[1] has quit (Ping timeout: 240 seconds). 08:41:56 -!- tuu has joined. 08:59:41 -!- sebbu has joined. 09:44:58 -!- imode has quit (Ping timeout: 276 seconds). 10:04:56 -!- oerjan has quit (Quit: Later). 10:05:14 <\oren\> look at this shit http://imgur.com/k86avnF 10:05:25 <\oren\> How is this allowed? 10:07:58 <\oren\> I think an intersection between 6 or more streets should be required to be a roundabout 10:25:07 -!- mroman has joined. 10:25:10 rello. 10:25:22 esolangs.org is down. 10:26:21 -!- xkapastel has quit (Quit: Connection closed for inactivity). 11:25:59 I can't count the number of times that has been said already. 11:26:09 I do have alerting on it as well. 11:27:12 Anyway, will set up the backup thing properly once I get home from the airport and unpack a little. 11:34:01 -!- boily has joined. 11:35:41 -!- PattuX has joined. 11:48:16 :D 11:48:24 well.. I'm not pressuring you. 11:48:26 just informing you 11:48:42 it's really the least important site in my life. 11:52:08 the most important is int-e's cheap server :D 11:52:13 because it hosts the online shell. 12:03:27 -!- zseri has joined. 12:11:17 -!- jaboja has joined. 12:12:35 fungot: can you be HackEgo? 12:12:35 boily: you you can start ' em in the paper 12:12:51 * boily starts HackEgo in the paper 12:27:55 -!- boily has quit (Quit: DECLARED CHICKEN). 12:49:44 -!- zseri has quit (Ping timeout: 260 seconds). 12:54:47 -!- zseri has joined. 13:02:25 -!- heroux has quit (Ping timeout: 246 seconds). 13:02:35 -!- heroux has joined. 13:10:13 - + + + ] + > [ [ + > < > ] - > [ - ] ] [ - < - + + ] - < < - > > + < - > [ < ] + > - + ] < ] < + - < - - [ < ] > 13:27:44 stupid evolver produces stupid programs 13:27:59 hm. 13:28:13 has anybody ever done evolving html/css 13:28:16 to fit a specific design 13:38:02 -!- Labeo has joined. 13:42:20 -!- Labeo has quit (Quit: Mutter: www.mutterirc.com). 13:44:31 -!- Labeo has joined. 13:47:41 -!- LKoen has joined. 13:48:31 -!- Labeo has quit (Client Quit). 14:00:30 -!- doesthiswork has joined. 14:01:38 -!- Labeo has joined. 14:03:14 -!- mroman has quit (Ping timeout: 260 seconds). 14:06:55 -!- ais523 has joined. 14:13:54 -!- Labeo has quit (Quit: Mutter: www.mutterirc.com). 14:16:11 -!- erkin has quit (Ping timeout: 255 seconds).
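[editor's note: on the evolver theme: the usual skeleton for this kind of search is a mutate-score-keep hill climb. A toy sketch against an invented string target (evolving real HTML/CSS toward a design would need to score rendered output instead, which is the hard part):]

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    static const char *target = "<div class=x></div>";
    static const char charset[] = " abcdefghijklmnopqrstuvwxyz<>/=";

    /* fitness: how many positions already match the target */
    static int score(const char *s, size_t n)
    {
        int k = 0;
        for (size_t i = 0; i < n; i++)
            k += (s[i] == target[i]);
        return k;
    }

    int main(void)
    {
        size_t n = strlen(target);
        char cur[64], next[64];
        srand((unsigned)time(NULL));
        for (size_t i = 0; i < n; i++)          /* random initial candidate */
            cur[i] = charset[rand() % (sizeof charset - 1)];
        cur[n] = '\0';

        while (score(cur, n) < (int)n) {
            memcpy(next, cur, n + 1);
            next[rand() % n] = charset[rand() % (sizeof charset - 1)];  /* mutate */
            if (score(next, n) >= score(cur, n))    /* keep non-regressions */
                memcpy(cur, next, n + 1);
        }
        printf("%s\n", cur);
        return 0;
    }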
14:22:54 -!- atslash has joined. 14:27:12 -!- erkin has joined. 14:27:57 -!- jaboja has quit (Ping timeout: 260 seconds). 14:32:07 -!- Mayoi has joined. 14:32:14 -!- erkin has quit (Disconnected by services). 14:37:42 -!- zseri has quit (Quit: Page closed). 14:39:40 -!- ais523 has quit (Remote host closed the connection). 14:40:50 -!- ais523 has joined. 14:42:09 -!- `^_^v has joined. 14:48:27 -!- LKoen has quit (Remote host closed the connection). 14:52:19 -!- tuu has quit (Remote host closed the connection). 14:59:41 -!- jaboja has joined. 15:03:26 -!- doesthiswork has quit (Quit: Leaving.). 15:05:25 -!- __kerbal__ has joined. 15:05:45 <__kerbal__> Hi 15:06:34 <__kerbal__> Does anyone know exactly what is wrong with the wiki? 15:07:25 no, we only heard that question like a dozen times in the last few hours 15:11:40 <__kerbal__> Probably CaC's fault again 15:16:22 -!- ATMunn has joined. 15:16:22 -!- ATMunn has quit (Changing host). 15:16:22 -!- ATMunn has joined. 15:18:10 -!- jaboja has quit (Ping timeout: 240 seconds). 15:21:11 <__kerbal__> https://www.youtube.com/watch?v=HuCJ8s_xMnI 15:21:20 <__kerbal__> One of the weirdest videos I've seen in a while 15:29:43 -!- Mayoi has quit (Quit: Ouch! Got SIGABRT, dying...). 15:42:12 -!- Bowserinator has quit (Excess Flood). 15:42:22 -!- Bowserinator has joined. 15:42:45 -!- Bowserinator has changed nick to Guest82305. 15:43:42 -!- augur has joined. 15:43:57 -!- __kerbal__ has quit (Quit: Page closed). 15:47:59 -!- augur has quit (Ping timeout: 255 seconds). 15:48:05 Heh, division is weird. You could consider multiplication its "opposite", but considering modulo its opposite also makes sense. :P 15:50:47 -!- contrapumpkin has joined. 15:57:56 -!- Guest82305 has changed nick to Bowserinator. 15:57:57 -!- Bowserinator has quit (Changing host). 15:57:57 -!- Bowserinator has joined. 15:58:45 so uh, can someone explain funge-98's stack stack to me? im having trouble understanding the commands it uses 16:01:34 -!- ais523 has quit (Remote host closed the connection). 16:02:44 -!- ais523 has joined. 16:23:18 -!- LKoen has joined. 16:46:20 -!- Lord_of_Life has quit (Remote host closed the connection). 16:59:35 -!- LKoen has quit (Remote host closed the connection). 17:15:05 Concept: like the "break n;" idea, but with returning values. "return<2> x;", for example, would return x and force the function that called it to immediately return x too. 17:17:07 that would break encapsulation a lot 17:17:24 -!- Lord_of_Life has joined. 17:17:31 exactly. 17:17:57 -!- LKoen has joined. 17:22:04 -!- AnotherTest has joined. 17:29:43 -!- LKoen has quit (Remote host closed the connection). 17:38:00 -!- FreeFull has joined. 17:38:12 -!- LKoen has joined. 17:39:31 -!- erkin has joined. 17:41:32 -!- AnotherTest has quit (Read error: Connection reset by peer). 17:41:51 -!- AnotherTest has joined. 17:44:27 -!- augur has joined. 17:45:55 -!- augur has quit (Remote host closed the connection). 17:47:13 -!- augur has joined. 17:58:44 -!- zseri has joined. 18:02:13 -!- AnotherTest has quit (Ping timeout: 276 seconds). 18:20:31 -!- AnotherTest has joined. 18:25:28 -!- LKoen has quit (Remote host closed the connection). 18:38:39 -!- AnotherTest has quit (Ping timeout: 255 seconds). 18:45:53 -!- imode has joined. 18:48:21 -!- AnotherTest has joined. 19:03:13 rdococ: that operation exists in INTERCAL 19:03:22 in fact, it's the only way to do flow control in INTERCAL-72 19:11:34 -!- erkin has quit (Quit: Ouch! Got SIGABRT, dying...). 19:13:38 -!- ais523 has quit (Ping timeout: 240 seconds).
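[editor's note: since the stack-stack question above never gets answered: in Funge-98, '{' pops a count n off the current stack, pushes a fresh stack onto the stack stack, and copies the top n cells over with their order preserved; '}' moves n results back down and discards the top stack. A simplified model, ignoring the storage-offset bookkeeping and negative-count cases the spec also defines, and with no bounds checking:]

    #include <stdio.h>

    #define MAX_STACKS 16
    #define MAX_DEPTH  64

    static int stacks[MAX_STACKS][MAX_DEPTH];
    static int depth[MAX_STACKS];
    static int top = 0;                        /* index of the TOSS */

    static void push(int v) { stacks[top][depth[top]++] = v; }
    static int  pop(void)   { return depth[top] ? stacks[top][--depth[top]] : 0; }
                                               /* empty stacks pop 0, as in Funge */

    static void begin_block(void)              /* '{' */
    {
        int n = pop();
        top++;
        depth[top] = 0;
        for (int i = n; i > 0; i--)            /* copy top n cells, keeping order */
            push(stacks[top - 1][depth[top - 1] - i]);
        depth[top - 1] -= n;
    }

    static void end_block(void)                /* '}' */
    {
        int n = pop();
        for (int i = n; i > 0; i--)            /* move top n cells back down */
            stacks[top - 1][depth[top - 1]++] = stacks[top][depth[top] - i];
        top--;
    }

    int main(void)
    {
        push(7); push(8); push(9);
        push(2); begin_block();                /* new TOSS gets 8 9; 7 stays below */
        printf("%d\n", pop());                 /* 9 */
        push(42);
        push(1); end_block();                  /* 42 returns to the old stack */
        printf("%d\n", pop());                 /* 42 */
        return 0;
    }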
19:35:08 -!- AnotherTest has quit (Ping timeout: 240 seconds). 19:40:42 -!- AnotherTest has joined. 19:48:41 -!- LKoen has joined. 20:01:09 -!- LKoen has quit (Remote host closed the connection). 20:01:29 -!- wob_jonas has joined. 20:02:04 ais523: I don't think that's the same. Intercal has multi-level return, that is, it can pop multiple entries from the return stack and return to the last one popped. 20:02:45 how is this different? 20:02:47 You could actually emulate that in GW-BASIC, which has a form of the RETURN statement that pops the return stack but jumps to a constant line given in the statement. 20:03:04 myname: I think the original question was a multi-level break. As in, from while or for or do-while loops 20:03:35 it was a multi-level return like the existing multi-level break 20:03:54 oh, that was the question? 20:03:59 sorry then 20:04:02 I didn't follow 20:07:45 -!- AnotherTest has quit (Ping timeout: 248 seconds). 20:08:30 -!- AnotherTest has joined. 20:09:55 -!- oerjan has joined. 20:14:35 Do you know when to expect the esolang wiki to be fixed? 20:15:37 wob_jonas: Yes, I have used that before, using RETURN to jump to a different line number (and have used it once to RETURN to the next line which is a RETURN to a constant line number, even) 20:17:32 -!- AnotherTest has quit (Ping timeout: 255 seconds). 20:19:35 -!- AnotherTest has joined. 20:19:40 PHP has multi-level break by number, while JavaScript has multi-level break by name. (Although I happen to think goto would be a better way of doing this anyways; you don't need much more than the single-level break/continue, as well as goto) 20:21:00 zzo38: Just throw in callCC and call it a day 20:21:40 callIAD 20:22:34 :( why are there no good befunge-98 interpreters for windows 20:22:57 because developers don't use windows? 20:23:41 yeah i guess 20:23:51 there's not even any good online ones :( 20:24:04 there are 20:24:21 at least, i havent found any 20:24:31 i don't know which one anymore, though 20:24:38 i used to have one modified 20:25:10 actually there are way more developers who use windows than developers who use befunge 20:28:14 One of the features of NNIX is that the file number has to be a constant and it does not support variable file numbers. Do you know why? 20:29:18 zzo38: because it's just a toy OS interface that's enough for the examples in the book, not a real complete operating system? 20:29:37 I think it should be fixed 20:30:13 zzo38: it also doesn't support modifying an already written file without erasing its data first, that's much more annoying IMO 20:30:45 (and that's despite that the book claims it supports everything the file interface of C89 can do except remove files) 20:31:25 but in any case, the OS interface is extensible, an OS could add new system calls, it doesn't intend to be complete and closed like the CPU architecture itself 20:32:29 Yes, that is another thing to fix. There are a few other things too, such as adding a file control interface, and perhaps a convenience function for reading/writing one value to/from $255. 20:32:42 myname: also, at some point ill get linux, but for now im stuck on windows so i have no choice but to use a windows or browser based one :/ 20:32:48 what do you mean "file control interface"? 20:33:02 Similar to fcntl() 20:33:26 (Although you don't need all of the functions of fcntl) 20:34:38 Also similar to ioctl() for some devices 20:35:57 again, he only needs a little OS interface for his examples. he did say he doesn't intend to create a full operating system.
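[editor's note: rdococ's "return<2> x;" from earlier, and the INTERCAL/GW-BASIC tricks just discussed, can be approximated in portable C with setjmp/longjmp: the outermost caller marks a point and a nested callee unwinds straight past the intermediate frame. Function names are invented:]

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf outer;

    static void inner(void)
    {
        longjmp(outer, 42);        /* like "return<2> 42;" */
    }

    static int middle(void)
    {
        inner();
        puts("never reached");     /* skipped, as with a multi-level RESUME */
        return 0;
    }

    int main(void)
    {
        int v = setjmp(outer);     /* 0 on the first pass, 42 after longjmp */
        if (v == 0)
            middle();
        else
            printf("got %d straight from inner()\n", v);
        return 0;
    }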
20:36:01 wob_jonas: hah, I actually kind of like that thing about not allowing modification of files after the fact. 20:36:11 if you want a full OS, just imagine a unix-like running on MMIX 20:36:37 Cale: you can modify files, it's just you can only do so if you do the equivalent of O_TRUNC 20:36:45 aw 20:36:49 Actually fcntl() probably isn't needed, but a few of the controls of ioctl() may be, mainly the terminal controls. 20:39:48 One possible way that could be done is to add additional command-line arguments to the simulator to load .so files assigned to different X values in the TRAP instructions, where 0 means to use the built-in stuff. 20:40:16 That way you could add one extension for connecting to the X server, one extension for music, and so on 20:49:26 I'm trying to rig up some method to photograph a book, for which I need to hold both the book and the camera in place. But I am failing miserably, because I'm really bad at hardware stuff, and don't have many things to use at home. 20:51:05 (and it shouldn't obstruct lighting, which is basically impossible since I want to get the camera close to the book) 20:51:27 -!- AnotherTest has quit (Ping timeout: 240 seconds). 20:52:12 -!- AnotherTest has joined. 21:05:48 -!- AnotherTest has quit (Ping timeout: 240 seconds). 21:06:48 -!- AnotherTest has joined. 21:18:40 wob_jonas: There are scanners which can feed a stack of pages through, if you don't mind destroying the book 21:20:33 There was one point when I was in high school where I helped a friend of mine make a digital version of his mother's cookbook, and we cut the ring binding off a copy and fed it through a scanner, and then OCR'ed it... and OCR was terrible back then, so I had a lot of hand-editing to do. :P 21:21:55 Still probably amounted to less work than typing out the whole book though 21:24:31 Cale: I don't want to destroy the book. A flatbed scanner is a good idea in general, and I did think of it, 21:24:39 and I could flip pages manually 21:25:37 but the problem is that the scanner I have access to has a maximum scan area only slightly bigger than A4, and this book is bigger than that. The page content might just barely fit in that area, but I couldn't position it right. 21:25:41 I'd need a larger scanner. 21:26:28 Although destroying a book isn't such a bad idea actually. I didn't think of that. I can't destroy this library copy, but I might be able to locate a new or used copy of this that I can destroy. 21:26:52 That would make this somewhat easier, because then I only have to position the individual pages, but it's still not easy 21:27:01 Let me check online for the price 21:28:06 -!- xkapastel has joined. 21:28:35 the easiest way to mount a normal digital camera is by the mount on the bottom, usually 21:29:01 I think it's a 1/4-20 thread, you could bolt it to something solid above the book 21:30:17 Hmm, it's not expensive. I could buy and destroy a copy. 21:30:26 I'd still have to figure out how exactly to photograph or scan it though. 21:35:34 Is there somewhere I can just borrow a flatbed scanner larger than A4? 21:40:32 what if you scanned the pages in two parts sideways and stitched them back together 21:40:44 print shops would probably have a big scanner 21:40:49 not sure what else would 21:40:53 Hoolootwo: would be hard to stitch them accurately 21:41:01 and to position the pages accurately that way 21:41:04 is there not software for that? :/
21:41:15 Yeah, but still 21:41:16 I guess doing it automatically is probably hard 21:41:29 It would be much better if scanned together 21:41:38 yeah, definitely 21:42:01 The image quality matters here. If it didn't, I'd just shoot the pictures with the camera handheld and be done with it. 21:42:37 with scanning, I think you have plenty of resolution 21:42:57 Exactly, that's why a scanner would be better 21:45:26 Apparently this print shop has A3 sized scanners I can use (for a fee obviously) 21:45:31 in 600 DPI 21:45:33 that could work 21:46:01 It's not even expensive. 21:49:12 -!- sleffy has joined. 21:49:26 Now I have made up a way to add operating system interface extensions to the MMIX simulation. It is: http://sprunge.us/PAdY 21:53:09 Do you like this? 21:53:15 I think I'll order a copy of this book. 21:56:34 Ordered. 21:57:12 I'll be able to get it in a few days. Then I can decide whether I want to scan it whole or cut up. 21:57:21 Cut up would probably be more precise. 22:01:46 Cale: thanks for the cut up idea 22:02:34 -!- wob_jonas has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client). 22:07:38 -!- AnotherTest has quit (Ping timeout: 240 seconds). 22:08:09 -!- AnotherTest has joined. 22:11:22 -!- LKoen has joined. 22:12:42 -!- hppavilion[1] has joined. 22:17:23 -!- `^_^v has quit (Quit: Leaving). 22:17:42 -!- `^_^v has joined. 22:20:50 -!- paul2520 has joined. 22:20:50 -!- paul2520 has quit (Changing host). 22:20:50 -!- paul2520 has joined. 22:21:43 -!- AnotherTest has quit (Ping timeout: 258 seconds). 22:22:52 -!- AnotherTest has joined. 22:27:08 -!- AnotherTest has quit (Ping timeout: 240 seconds). 22:40:16 -!- btiffin has joined. 22:42:23 -!- `^_^v has quit (Quit: This computer has gone to sleep). 22:43:57 -!- oerjan has quit (Quit: Nite). 22:51:07 -!- zseri has quit (Quit: Page closed). 22:56:44 -!- ais523 has joined. 22:59:20 -!- boily has joined. 23:14:35 `w -- Will HackEgo ever be again \\ Did it trip over a slice of bread \\ I need my random wisdom \\ Perhaps it has fainted dead away? 23:14:55 https://github.com/aaronduino/asciidots/ 23:21:54 helloily 23:23:21 boily bluepommel 23:24:23 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”). 23:24:55 fizzie: less fizzie, more fixxie twh 23:34:53 QUINTHELLOPIA! 23:34:56 helloochaf! 23:35:16 bluepommel? 23:35:50 -!- imode has quit (Ping timeout: 240 seconds). 23:37:02 oui 23:59:27 -!- augur has quit (Ping timeout: 240 seconds).