00:03:11 -!- augur_ has joined.
00:04:46 <Sgeo_> Should I read the 80386 manual to start learning assembly?
00:04:52 <Sgeo_> OSDev wiki seems to suggest that
00:05:00 <Sgeo_> http://wiki.osdev.org/Learning_80x86_Assembly
00:05:05 -!- augur has quit (Ping timeout: 240 seconds).
00:06:17 <wob_jonas> Sgeo_: maybe. but don't forget that cpus have changed a lot since the 386. they have out-of-order execution and speculation and rollback and crazy branch condition and target prediction and multi-level caches and vector registers (which are the new general registers) and 64-bit mode and all that stuff now.
00:06:56 <Jafet> not all english-speaking countries have useful libraries
00:08:44 <ais523> Sgeo_: do you want to learn how to write assembly that works or assembly that runs fast?
00:08:58 <Sgeo_> I want to understand CPUs better.
00:09:01 <ais523> assembly optimization is pretty crazy nowadays, mostly because processors no longer work anything like asm assumes they do
00:09:26 <ais523> modern x86 basically recompiles the asm into an entirely different format on the fly
00:10:00 <wob_jonas> ais523: I personally care about the x86 cpu so I can learn how to write compiled code (C or C++ etc) that runs fast. it's rare that you actually have to write assembly, but knowing what the cpu does and caches and stuff helps when you write compiled code.
00:11:43 <ais523> it's hard to know how to understand CPUs better nowadays
00:11:58 <ais523> I guess looking at pathological cases and understanding why they're pathological can help
00:12:14 <Jafet> I suppose you could start with agner fog's optimization manual
00:13:03 <Jafet> (namely, the microarchitecture manual)
00:13:25 -!- augur_ has quit (Read error: Connection reset by peer).
00:14:16 <wob_jonas> ais523: you don't need to understand it perfectly, but knowing something about it can help
00:20:12 <Jafet> one other approach, which is a bit more relaxing, is to read mysticial's stack overflow answers
00:20:21 <zzo38> I would want to instead design the computer better, to not be so complicated and confusing like that.
00:22:19 <wob_jonas> Some of the complexity has really good excuses. Like, you know, the speed of light and how big atoms are.
00:24:13 <zzo38> Yes, but I think there are better ways to deal with that.
00:25:13 <imode> less speed of light and more speed of electric charge through a medium. :P
00:25:27 <imode> but I guess if you did optical computing..
00:26:19 <wob_jonas> imode: that matters only in the short deep circuits, like within the cpu. for between the main ram and cpu where there's only wires, no logic, the propagation speed is basically the speed of light.
00:27:01 <zzo38> You shouldn't put in the out-of-order execution, complicated caching, speculation/rollback, etc. Make any such stuff be done explicitly as part of the program instead.
00:27:06 <Jafet> then there are dumb complications, like how every x86 chip has three different types of floating point units
00:27:22 <imode> what? it's all subject to the speed of electric charge through a medium. transference of charge is NOT the speed of light.
00:28:03 <Jafet> (apparently gcc defaults to doing all floating point in sse now, if you use -ffast-math)
00:28:08 <imode> regardless of whether or not there's logic in the way. logic only adds switching delays, gate delays, etc.
00:28:49 <wob_jonas> zzo38: I don't think that's a good idea. hardware guys tried that, but it turns out it only works if you assume the software guys can write magic compilers that can guess how well cached each memory load actually is, and that the people who write the high level code want to annotate their C code with lots of hints,
00:29:14 <wob_jonas> and even then the machine code will be very verbose and you can't cache it properly.
00:29:35 <wob_jonas> out-of-order execution and register renaming and speculative branch prediction works well.
00:29:58 <ais523> the way to think about modern x86 machine code is as a compression scheme for what's actually run
00:30:03 <ais523> and one that isn't very good, at that
00:30:24 <zzo38> I don't like it, as then you can't know what order it is in, and so on. MMIX has explicit branch prediction; you must specify whether you want a branch or not-branch to be faster.
00:30:28 <imode> the machine models probably should stay the same, but we should move to asynchronous circuits.
00:30:38 <Jafet> actually, the complexity of a modern core exists for a more fundamental reason
00:30:51 <Jafet> because memory is getting slower
00:30:53 <zzo38> Then use a better compression scheme, I suppose?
00:31:13 <wob_jonas> most of the time the bottleneck is either memory access when people write cache-unfriendly code or decoding time when people write cache-friendly code. it's less common that the bottleneck is mispredicted jumps which is the only case when reducing the depth of the pipeline would actually help.
00:32:16 <zzo38> If I want caching, I should explicitly write in the caching instead.
00:32:40 <wob_jonas> zzo38: tell that to stupid programmers who insist on using large arrays of 64-bit pointers everywhere when large arrays of 32-bit array indexes would work.
00:32:48 <Jafet> since memory is getting slower, there is nothing else for your doubling transistor counts to do other than reorder more loads and stores, or cram more ways to use that memory bandwidth into the instruction decoder
00:32:57 <wob_jonas> zzo38: it's not "if I want caching". you almost always want caching.
00:33:42 <zzo38> I don't want to complicate it. You don't need such a huge number of transistors and such slow memory; make faster memory then.
00:34:34 <wob_jonas> Jafet: I have said this a few times, but what would IMO help a lot is if the cpu and OS people together found a way to increase the minimum page size from 4k, because then we could have more L1 cache; but it only works globally, and some software assumes the page size is fixed, so it's really hard to do without breaking compatibility with everything
00:35:40 <wob_jonas> zzo38: memory throughput is plenty fast, especially if you're willing to buy expensive hardware. you can't have lower-latency memory though, because the main memory is 0.1 to 0.3 meters from the cpu physically, so the signal takes several clock cycles to propagate
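(A rough sanity check of that claim, assuming a 3 GHz clock and signal propagation at about half the speed of light in board traces: one cycle is ~0.33 ns and a signal covers roughly 0.15 m per ns, so DRAM 0.2 m away means a 0.4 m round trip, about 2.7 ns, which is around 8 clock cycles of pure wire delay before the DRAM access itself is even counted.)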
00:35:52 <Jafet> what does the page size have to do with it?
00:36:20 <wob_jonas> that's why we need lots of on-board caches, in three levels (L3 for the whole chip, L2 per core or per two cores, and L1 with very low latency really close to the execution units)
00:37:39 <wob_jonas> Jafet: basically the L1 cache wants to have very low latency, so it has to guess which cache slot holds your memory before it can look up the physical address in the page table cache (aka TLB = translation lookaside buffer), then verify that the address matches what the cache entry caches.
00:38:41 <wob_jonas> So the L1 cache can only use the low 12 bits of the address, and it practically can't have more than 8 ways per set because then managing it would be too slow. So L1 caches have been topped out at 32 kilobytes (8 page sizes) of data cache and 32 kilobytes of code cache per core for half a decade now.
00:38:48 -!- sebbu has quit (Ping timeout: 240 seconds).
00:38:49 <zzo38> I still think there is a way to do it though, by having separate addressing for the cache
00:39:18 <zzo38> And put the memory in the processor itself, also microcode, so that you can program your own microcode too, to improve the speed.
00:39:38 <wob_jonas> All cpus have that much L1 cache, but none can have more. To fix this, either you need larger page sizes, or some even more incompatible change.
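A minimal C sketch of the constraint wob_jonas describes, assuming a virtually-indexed, physically-tagged (VIPT) L1 with the usual 64-byte lines: the set index must fit in the untranslated page-offset bits, so capacity is capped at page size times associativity.

    #include <stdio.h>

    int main(void) {
        unsigned page_size = 4096; /* untranslated low bits of a virtual address */
        unsigned line_size = 64;   /* bytes per cache line */
        unsigned ways      = 8;    /* associativity */

        /* A VIPT cache must pick the set using page-offset bits only,
           so sets * line_size <= page_size. */
        unsigned max_sets = page_size / line_size;       /* 64 sets */
        unsigned max_size = max_sets * line_size * ways; /* = page_size * ways */

        printf("max VIPT L1: %u KiB\n", max_size / 1024); /* prints 32 KiB */
        return 0;
    }

Under these assumptions, growing the page size (or the associativity) is the only way to grow a VIPT L1, which is exactly the trade-off being discussed.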
00:40:14 <zzo38> I am not so concerned if C code will run extremely fast, since you can write it in assembly language if you want to code specifically for this computer.
00:40:20 <wob_jonas> zzo38: the memory IS practically in the processor. processors have like 38 megabytes of L3 cache, and it keeps increasing. you get memory outside of the cpu too because most people want even more memory than that.
00:41:32 <zzo38> Yes, but you could have separate addressing for them.
00:41:56 <imode> yeah. sliding memory windows..
00:42:07 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”).
00:42:14 <imode> you could run a linux system off of cache alone these days.
00:42:17 <wob_jonas> zzo38: why would that be worthwhile? the program can't tell in advance which memory will be in L3 because that depends a lot on the multitasking, and spilling from L3 to memory doesn't really add much overhead anyway.
00:42:39 <Jafet> note that for the machines that have 38 megabytes of L3, it takes almost as long to access as the dram
00:42:52 <wob_jonas> imode: I think motherboards don't really support that, but that doesn't matter, because slow RAM chips are cheap anyway, so you can just put some in.
00:42:56 <zzo38> Design it so that the program does know in advance, because only what the program puts there will be there.
00:43:02 <Jafet> at least for chipsets that I know of
00:43:16 <wob_jonas> Jafet: no, not really. not in latency. It's still five times closer physically than the main memory.
00:43:37 <shachaf> I ate lángos the other day.
00:44:07 <zzo38> You may even add parallel memory transfer if you want to, and then you can only address the cache, and not the external memory.
00:46:40 <wob_jonas> I don't like food that's soggy with fat. That mostly comes up with ways to prepare meat, but lángos is an example too.
00:48:30 <shachaf> Is there other Hungarian food I should try?
00:48:34 <Jafet> here's a diagram claiming 40ns for the SB-E interconnect https://mechanical-sympathy.blogspot.de/2013/02/cpu-cache-flushing-fallacy.html
00:49:01 <Jafet> although I'm not sure if I should believe 65ns dram
00:49:06 <wob_jonas> shachaf: I can't predict what you'd like.
00:49:16 <wob_jonas> are you in Hungary or close somewhere?
00:49:47 <wob_jonas> or just ate lángos in Norway? I've seen them sold in Sweden, though I can't tell how authentic they are.
00:50:02 <shachaf> I don't know how authentic it was, or how to measure that.
00:51:32 <Jafet> relatedly, the mill architecture videos were p.g.
00:52:25 <wob_jonas> In any case, even if better architecture is possible, I care about x86_64 only, because it has the best support: most of the powerful computers have it, including anything I'll buy, and there's lots of good tools like optimizing compilers and good documentation.
00:52:33 <shachaf> You should go work for the Mill folks.
00:54:02 <zzo38> No, x86_64 is too complicated and too confusing; MMIX is better, and the original x86 is also good, and also MIX, and also 6502.
00:54:13 <Jafet> I did watch them all, but the plot twist in 10 was perhaps worth it
00:54:36 <shachaf> zzo38: Do you like the Mill?
00:55:09 <zzo38> I looked but was unable to find the proper document of it
00:55:26 <shachaf> zzo38: I think the best documentation is in video form unfortunately.
00:56:16 <wob_jonas> There are a lot of people that start projects about fancy new cpu architectures, but actually making good optimized and well-tested cpu hardware and supporting software like optimizing compilers is pretty hard, so I don't think those projects make any sense.
00:56:39 <wob_jonas> Only a big company like Intel or AMD has the resources to be competitive in it.
00:56:41 <zzo38> You can write the program in assembly language, though.
00:57:12 <Jafet> huh, video 10 (“compiler”) is not the tenth on the website
00:57:25 <wob_jonas> And while it's easy to criticise Intel, and they do make mistakes, they are actually doing pretty good work overall IMO.
00:57:28 <Jafet> well, it was that one
00:57:31 <shachaf> Hmm, what's the twist in that video?
00:58:22 <Jafet> the true nature of the mill is revealed in the end, in the q&a session I think
00:58:38 <wob_jonas> zzo38: only if you don't care about all the existing software written in C and other compiled languages that you'll want to run, and want to perform fast, such as the linux kernel itself
00:58:59 <shachaf> Which true nature? I watched the video but it was a while ago.
01:00:04 <wob_jonas> and don't even try to say you'll just have two different cpus side by side, because it turns out, if you want to do thousands of operating system calls per second and low latency networking and stuff like that, that just doesn't work.
01:00:25 <zzo38> Of course I will likely want the programs to run, but I can do without them going fast if making them fast means making a lot of confusion and complication, and instead write assembly language programs when wanting to make a faster program specifically for this computer. This is always the case anyways; you will want to write assembly language programs hand-optimized for space and speed, taking advantage of the specific features of this computer.
01:00:49 <zzo38> For example, you might use different kinds of data structures for the version of the program for different computers, too.
01:01:17 <wob_jonas> zzo38: for some programs, you can get away with running slow. but you won't rewrite the linux kernel and all the hardware drivers. there's a lot of work going into that project.
01:01:19 <zzo38> Or one version might omit some check that is needed on another implementation. Or whatever.
01:01:23 <Jafet> shachaf: something about how the belt is really just a better register map
01:01:32 <Jafet> but I don't remember clearly either
01:01:40 <shachaf> Jafet: Ah, I vaguely remember something like that.
01:01:47 <zzo38> Different computers will have different interfacing with hardware anyways.
01:02:02 <shachaf> I talked to someone who worked at Intel about it and they were a bit dubious about the hardware implementation of it.
01:02:09 <shachaf> But I don't really know much about it.
01:02:13 <shachaf> zzo38: Do you like the Mill's instruction encoding?
01:02:20 <zzo38> I don't know how it works
01:02:22 <shachaf> zzo38: There are two instruction pointers, one moving forward and the other moving backward.
01:03:48 <zzo38> My own idea is a bit different: one thing it does is have a microcode memory (with RAM and ROM) that you can load VLIW microcode into, and it uses entirely different addressing (and even a different number of bits) from the external memory. There are others too.
01:04:15 <shachaf> zzo38: Do you like the Mill's belt?
01:04:16 <zzo38> Programs can load their own self-modifying microcodes.
01:04:56 <shachaf> Is self-modifying code worth the trouble?
01:05:08 <zzo38> shachaf: Again, I don't know how it works; you will need to explain it if I am to answer such questions
01:05:26 <zzo38> I think self-modifying code is worth the trouble; I do not see why not.
01:05:53 <shachaf> zzo38: The belt is like a stack, except it's a queue of some bounded size.
01:05:56 <zzo38> As long as the CPU execution is defined precisely and unambiguously, then you have compatibility.
01:06:32 <shachaf> Instructions push their results onto the belt, and refer to belt positions by index (i.e. how recently a value was pushed).
01:07:11 <shachaf> When values fall off the end of the belt, they disappear.
01:07:31 <zzo38> Actually I have done stuff like that before, so yes I do understand.
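A minimal C sketch of the belt as shachaf describes it, modelled as a fixed-length ring buffer where index 0 is the most recently pushed value; the length and all names here are made up for illustration.

    #include <stdint.h>
    #include <stdio.h>

    #define BELT_LEN 8 /* illustrative size */

    typedef struct {
        int64_t slot[BELT_LEN];
        unsigned head; /* index of the most recent push */
    } Belt;

    /* Every result is pushed to the front; the oldest value falls off. */
    static void belt_push(Belt *b, int64_t v) {
        b->head = (b->head + BELT_LEN - 1) % BELT_LEN;
        b->slot[b->head] = v;
    }

    /* Operands name results by recency: 0 = newest, 1 = next-newest, ... */
    static int64_t belt_get(const Belt *b, unsigned n) {
        return b->slot[(b->head + n) % BELT_LEN];
    }

    int main(void) {
        Belt b = {{0}, 0};
        belt_push(&b, 2);                                 /* belt: 2     */
        belt_push(&b, 3);                                 /* belt: 3 2   */
        belt_push(&b, belt_get(&b, 0) + belt_get(&b, 1)); /* belt: 5 3 2 */
        printf("%lld\n", (long long)belt_get(&b, 0));     /* prints 5    */
        return 0;
    }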
01:08:52 <wob_jonas> people try that, but IMO it doesn't work. ostensibly it saves a few bits in the encoding, but nothing else (the register renamer isn't actually a bottleneck EVER, register reads and writes sometimes are, but a belt doesn't help), and your code gets much harder to write when you need to store values for longer or need conditions or loops.
01:09:13 <wob_jonas> it does sound like a good idea, I've thought about it, but I don't think it works.
01:10:20 <Jafet> as I understand it, the main point of the belt is that you get to use 512 registers instead of 16
01:10:54 <wob_jonas> how does that work? don't you still need to encode all the input registers in the code explicitly, even if you can omit the output register most of the time?
01:11:48 <wob_jonas> we can have a plain large register array (like the one with 32 vector registers in future x86) or a large MMIX-like register stack if we just want more registers
01:12:17 <wob_jonas> (and that's 32 vector registers, plus the same 16 index registers you've always had)
01:12:17 <Jafet> well, the output registers are always fixed, so leaving that out does free up some bits
01:12:30 -!- augur has joined.
01:12:44 <wob_jonas> (and if you want even more, you can save index registers into fields of vector registers, and also efficiently use the stack)
01:12:48 <Jafet> but the videos didn't go into any detail about the instruction encoding
01:12:55 <ais523> wob_jonas: golfing languages have experimented with different registerish things quite a bit
01:12:59 <wob_jonas> I just don't believe it saves much
01:13:12 <ais523> I think the optimum is to have some way to have very cheap, short-lived local values but also separate storage for longer-lived values
01:13:23 <ais523> a Mill-like belt is good at the former but not the latter
01:13:59 <wob_jonas> ais523: past a certain point, golfing doesn't help. modern x86 extensions actually are somewhat less golfed than old x86 used to be, because being able to decode the instructions quickly is more important. so they actually have a lot of unused bits in instructions in EVEX encoding.
01:14:28 <ais523> wob_jonas: well, golfing helps in that it reduces cache pressure
01:14:31 <wob_jonas> Sure, compact code still matters, but extreme golfing isn't always good.
01:14:36 <zzo38> I prefer how MMIX does it actually, although I can also think of a few other ideas about how to do it
01:15:02 <ais523> and the thing about a highly golfed instruction set is that there's more scope to improve its performance as processors get better
01:15:09 <wob_jonas> and you lose a lot in expressiveness
01:15:14 <ais523> REX encoding annoys me, it's so verbose, and yet you have to use it for basically everything on x86_64
01:15:30 -!- doesthiswork has joined.
01:16:53 <wob_jonas> ais523: nah, the double prefixes (0x0F and one more prefix byte) for old SSE code are much worse, but it was necessary for easy decoding
01:17:15 <zzo38> I still think modern x86 is too messy, and modern ARM is also too messy.
01:17:17 <wob_jonas> but it got somewhat better with the later extensions (AVX code and AVX512 code)
01:17:55 <wob_jonas> zzo38: sure, there's some historical cruft, but a lot of it is pushed out to where it doesn't actually impact the performance if you don't use it
01:18:34 <zzo38> I am not talking about historical stuff, but about the new stuff.
01:18:42 <Jafet> the mill speaker was going on about small loops very often, so I don't think his goal was to improve instruction set density
01:20:15 <wob_jonas> zzo38: some of it is messy, but they are getting better at the design than they used to be. AVX512 actually manages to avoid the AVX stupidity (where vector registers got split into 16-byte and 32-byte parts), which was only done to make it easier for operating systems
01:21:18 <wob_jonas> you only affect whole registers now, which by the way means you can't have callee-saved vector registers, because the callee can't save the upper part if the register is ever extended to 1024 bits; but that ship has sailed with AVX2 already anyway
01:21:22 <Jafet> he had a plan to pipeline loops with nullable values, though (which can be implemented in a conventional CPU)
01:21:38 <wob_jonas> the vector registers all have to be scratch except for the lower 128 bits of four of them
01:22:03 <wob_jonas> Jafet: we have efficient conditional move instructions for that
01:23:05 <wob_jonas> it took us quite a while, they should have added them long ago, so sadly you still have to feature test for them on x86_64, but still, they are there in all currently used cpus
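A minimal example of the kind of code meant here, assuming a scalar select: with optimization enabled, mainstream x86_64 compilers typically lower the ternary to a cmov instead of a conditional branch, though that depends on compiler heuristics, so this is a sketch rather than a guarantee.

    #include <stdio.h>

    /* A data-dependent select with no control dependence: a natural
       candidate for cmov, which avoids branch mispredictions. */
    static long select_max(long a, long b) {
        return (a > b) ? a : b;
    }

    int main(void) {
        printf("%ld\n", select_max(3, 7)); /* prints 7 */
        return 0;
    }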
01:23:09 <ais523> zzo38: even the original 8086 was pretty messy
01:23:17 <ais523> x86 must be the worst popular asm
01:23:34 <shachaf> What's a good popular asm?
01:23:47 <wob_jonas> ais523: yes, and it already had stupid historic cruft for marketing reasons
01:24:02 <ais523> 6502 isn't popular but it is fairly good for the scale of processor it's on
01:24:27 <zzo38> ais523: Yes, but still not quite as messy as the modern kind
01:24:29 <Jafet> you still have to move a value, though, and it could cause an instruction in the pipeline to trap?
01:25:02 <ais523> 6502's pipeline is /very/ short
01:25:06 <zzo38> I do like 6502 though, as well as MMIX
01:25:33 <wob_jonas> 6502 was good when it was new. but it's just not modern.
01:26:08 <imode> ARM then. AVR. MIPS. choose one, they all suck in many ways. :P
01:26:36 <zzo38> That is why I prefer MMIX over ARM, AVR, MIPS
01:27:14 <tswe_tt> What's all the hype about 8086?
01:30:22 <wob_jonas> tswe_tt: the historical 8086 isn't important, except historically: it has a lot of successors that inherited decisions from it that made sense at the time but are hard to support now and take a ton of time to get rid of. modern x86_64 matters because it's the best supported high-performance cpu on the market now, with good
01:30:22 <wob_jonas> hardware and software, both well-tested and high performance.
01:36:56 <wob_jonas> wait, where's HackEgo. fungot, what did you do with HackEgo?
01:36:56 <fungot> wob_jonas: yes i think everybody's just afraid i think now
01:36:59 <Jafet> when people say that arm's instruction set is good, are they referring to a subset that does not include thumb, thumb2, jazelle, neon, virtualization, or mov pc
01:38:23 <imode> all of the damn embeddings I've seen for binary trees have been in hypercubes, and they all waste one bit of space.
01:39:07 <imode> one bit per path to a node.
01:40:02 <imode> it seems that I can't escape paying one bit..
01:40:38 <imode> certainly trying to.
01:41:19 <imode> you can encode any path from the root of an N level full binary tree to any of its leaves in N bits, but you can't encode a partial path.
01:41:44 <imode> unless I'm clinically insane, you will always have leftover bits that stand for a left traversal if left unchanged.
01:44:53 <Jafet> ah, the wonderful world of small-space information-theoretic lower bounds
01:45:11 <imode> you have to waste _at least one bit_ to mark the start of a valid sequence of branches.
01:45:45 <Jafet> doesn't an n-level binary tree have 2^n-1 nodes?
01:50:57 <imode> Jafet: I'm trying to avoid integer arithmetic. calculating parent paths is not beneficial if you're dealing with paths through 1024-level binary trees or larger.
01:51:11 <imode> this is actually faster.
01:51:45 -!- augur has quit (Remote host closed the connection).
01:52:28 <imode> the problem reduces to "how do I store the length of a bit vector without storing the length of a bit vector." :P
01:55:12 -!- augur has joined.
01:55:29 -!- boily has quit (Quit: ARTICULATED CHICKEN).
01:59:02 <Jafet> I believe that computers do integer arithmetic in binary
01:59:49 <imode> that they do. but I would rather not implement arbitrarily large binary numbers just to store large paths.
02:00:08 <imode> now, arbitrarily large bitvectors on the other hand, that I can get behind.
02:00:09 <zzo38> But for what kind of computer?
02:01:50 <wob_jonas> Let's see what the weather forecast says. Does the weather cool down after this rain and storm and cold front?
02:02:32 <Jafet> if you store all paths with the same number of bits, then you do not need any extra bits
02:03:43 <imode> yeah, you do. if you want to store a path in a byte, you're going to store lefts as 0's, and rights as 1's. the path 101 is really 10100000, which is not the path you intended.
02:04:36 <wob_jonas> A bit, but not enough. It will warm up again. Damn.
02:04:59 -!- wob_jonas has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client).
02:05:43 <Jafet> hmm, it seemed like there was an arbitrage opportunity for a bit
02:05:56 <zzo38> Do you like the two new loader programs for MIX that I have designed? (Both need only one card, and actually the last five character positions are unused. One is for loading programs with a specific byte size, while the other one is independent of byte size)
02:06:51 <zzo38> " N O6 A O4 H N ENX E K BU I OA H A. PA N D LB E AEU ABG G 9 "
02:07:20 <imode> Jafet: here's the solution to that. pad the path with however many unused bits there are - 1, then pad it with a 0. so the path 101 becomes 11110101. you march forward through the bit vector and stop after the first 0.
02:07:56 <imode> the downside is that now your paths must _always_ start with a 0. you could fenangle it to work out an extra root node from a path, but uh.. yeah.
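A small C sketch of imode's padding scheme, assuming paths of up to 7 bits packed MSB-first into one byte: some ones, then a single 0 marker, then the path bits; the decoder scans past the ones and the marker.

    #include <stdio.h>

    /* Encode an n-bit path (n <= 7, path in the low n bits) into one byte:
       (8-n-1) one-bits, then a 0 marker, then the path. 101 -> 11110101. */
    static unsigned char encode_path(unsigned path, unsigned n) {
        unsigned pad = 8 - n - 1;
        return (unsigned char)((((1u << pad) - 1) << (n + 1)) | path);
    }

    /* Decode: skip the leading ones, skip the 0 marker, keep the rest.
       (The all-ones byte is not a valid encoding.) */
    static unsigned decode_path(unsigned char byte, unsigned *n_out) {
        unsigned i = 7;
        while (byte & (1u << i)) i--; /* find the 0 marker */
        *n_out = i;
        return byte & ((1u << i) - 1);
    }

    int main(void) {
        unsigned n;
        unsigned char b = encode_path(0x5, 3);           /* path 101   */
        printf("%02x\n", b);                             /* f5         */
        printf("%x (%u bits)\n", decode_path(b, &n), n); /* 5 (3 bits) */
        return 0;
    }

The marker is the wasted bit imode is talking about: a byte holds at most 7 path bits, never 8.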
02:10:14 <lambdabot> *** "finagle" wn "WordNet (r) 3.0 (2006)"
02:10:14 <lambdabot> v 1: achieve something by means of trickery or devious methods
02:10:14 <lambdabot> [syn: {wangle}, {finagle}, {manage}]
02:10:33 <imode> never even knew it was.. really a word.
02:11:28 <Jafet> it doesn't seem tricky or devious if you just number all the nodes in level order starting from 00000001
02:12:11 <imode> right. so, with that numbering, give me the path to that node.
02:12:36 <Jafet> then 101$ is 00001101
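Jafet's level-order numbering is the same idea with the padding flipped into a single leading 1: prepend a 1 to the path, and the resulting integer encodes both the bits and the depth. A minimal C sketch (the function names are made up).

    #include <stdio.h>

    /* Level-order number of the node reached by an n-bit path:
       a 1 followed by the path bits. Path 101 (n=3) -> 1101 = 00001101. */
    static unsigned number_node(unsigned path, unsigned n) {
        return (1u << n) | path;
    }

    /* Recover the path: the position of the top set bit is the depth,
       and everything below it is the path. */
    static unsigned path_of(unsigned node, unsigned *n_out) {
        unsigned n = 0;
        while ((node >> (n + 1)) != 0) n++;
        *n_out = n;
        return node & ((1u << n) - 1);
    }

    int main(void) {
        unsigned n, node = number_node(0x5, 3);         /* path 101   */
        printf("%02x\n", node);                         /* 0d         */
        printf("%x (%u bits)\n", path_of(node, &n), n); /* 5 (3 bits) */
        return 0;
    }

A pleasant side effect of this numbering: the parent of node k is just k >> 1, and its children are 2k and 2k+1.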
02:13:39 <imode> because the real idea here is storing sparse trees, where you give me a node and I assume that every node along the path is a valid one.
02:14:34 <imode> so when I say "well does node X exist?", all I have to do is run over all the stored paths and check whether a partial match exists.
02:17:17 <Jafet> well, storing trees as sets of paths is a generally poor way to store trees
02:17:51 <imode> what alternative would you give me? I just need to store the structure of the tree and query whether a given node exists.
02:19:44 <Jafet> I might offer a parenthetical (even a balanced one)
02:20:00 <imode> mm. sure. that's a good way to store static trees.
02:20:12 <imode> hell, I have an encoding that saves a bit.
02:20:26 <imode> but dynamic trees.
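For reference, the static encoding Jafet is pointing at can be as simple as preorder shape bits, one of several equivalent parenthesis-style encodings that the succinct-tree literature builds rank/select indexes on: 1 for a node, 0 for an empty subtree, 2n+1 bits for an n-node binary tree. A minimal C sketch; note that, as imode objects, inserting a node means rewriting the string.

    #include <stdio.h>

    typedef struct Node { struct Node *left, *right; } Node;

    /* Preorder shape bits: 1 for a node, 0 for an empty subtree. */
    static void emit(const Node *t) {
        if (!t) { putchar('0'); return; }
        putchar('1');
        emit(t->left);
        emit(t->right);
    }

    int main(void) {
        /* a root with two leaf children */
        Node l = {0, 0}, r = {0, 0}, root = {&l, &r};
        emit(&root); /* prints 1100100 */
        putchar('\n');
        return 0;
    }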
02:22:46 <Jafet> most trees look static to me; they sway a bit sometimes
02:23:29 -!- btiffin has joined.
02:23:39 <Jafet> are you one of those people who graft branches, or turn them sideways?
02:24:30 <imode> I would really like to not rewrite a given bit string representing a tree every time I need to insert a node.
02:24:34 <Jafet> actually, I'm not sure I've seen a paper that implements tree rotations
02:24:53 <Jafet> they only tend to cover indels, and maybe split/merge
02:25:09 <Jafet> I guess rotations reduce to split/merge
02:25:21 <imode> pretty much any operations imply rewriting the whole bit string.
02:25:54 <Jafet> I think navarro had a paper that demonstrated logarithmic indels, splits and merges
02:26:04 -!- augur has quit (Remote host closed the connection).
02:26:23 <imode> I'd rather take my chances with early matches and additive updates. the benefit to my method is that no matter how the paths arrive, the tree is final.
02:26:44 <imode> meaning I could send over the paths 000, 010, 110, 101 in any order and the tree would still be the same.
02:28:30 <imode> so I guess... I'm willing to pay the extra storage.
02:28:47 -!- contrapumpkin has quit (Quit: My MacBook Pro has gone to sleep. ZZZzzz…).
02:29:41 <Jafet> well, “demonstrates” might have been the wrong word, as this data structure has probably never been actually implemented
02:29:47 <Jafet> https://arxiv.org/abs/0905.0768
02:30:07 <imode> oh yeah, I saw this.
02:35:23 -!- augur has joined.
02:36:00 <Jafet> I don't think any of these bit strings are meant to be stored as bit strings
02:37:12 <Jafet> they are stored as trees, with nodes near the edge compressed into short strings to reduce size
02:37:38 <imode> mm. I'd store them as bit strings. efficient traversal, but inefficient updates. :P
02:38:06 <imode> hell, efficient storage too.
02:38:23 <Jafet> as long as you compress groups of about (log n)^k nodes at a time, the remaining n/(log n)^k pointers for the tree no longer prevent you from having the word “succinct“ in your paper title
02:39:22 <Jafet> so it's asymptotically worthwhile to interpret trees as bitstrings then store them in trees of bitstrings
02:39:59 <Jafet> perhaps xzibit would have been a good string algorithms researcher
02:40:44 <ais523> is there a word for the relationship expressed by that meme that's more precise than just "recursion"?
02:41:13 <ais523> (for reference, the "reference implementation" of that meme is "I put a car in your car so you can drive while you drive")
02:42:19 <Jafet> I'm not sure, but such a word could also describe the work that someone once did benchmarking nested self-interpreters
02:43:04 <Jafet> oh, the eigenratio website still exists
02:45:30 <ais523> I've been thinking about nested self-interpreters quite a bit recently
02:45:50 <ais523> trying to work out what sort of language would naturally have an eigenratio of 1 for most obvious ways of writing a self-interpreter
02:52:21 <Jafet> if you take a recursive unit cell grid in conway's life and run it with hashlife you should technically get an eigenratio of 1
02:53:26 <Jafet> I'd test this, but nesting a unit cell sounds like something I'd need to generate with a script and I don't care enough
02:55:27 -!- augur has quit (Remote host closed the connection).
03:00:43 <shachaf> Why would you need to nest it with a script?
03:01:28 <shachaf> If you have some way to describe macrocells, you should just be able to do a simple substitution or something for the next level.
03:02:05 <Jafet> well, I would do such a substitution with a script
03:02:21 <Jafet> I'm not sure if macrocell identifiers are required to be increasing
03:02:24 -!- ATMunn has quit (Remote host closed the connection).
03:02:28 <Jafet> that would make it more annoying
03:02:42 <imode> Jafet: storing trees, even partial trees, via pointers is not succinct, afaict.
03:03:50 -!- augur has joined.
03:04:39 <Jafet> apparently the nodes are numbered implicitly, so I'd have to change all the numbers when combining macrocell files
03:05:09 <shachaf> Is there a standard format for describing hashlife states?
03:05:40 <Jafet> yes, I think that format is called macrocell
03:06:03 <Jafet> well, not if you also want the cached results
03:08:58 <ais523> Jafet: ah right, hashlife is a good example here
03:09:15 <Jafet> it seems that nobody wants the cached results, though, not even golly, which clears the cache every GC cycle (even for results that didn't get GC'd!)
03:11:00 <Jafet> ais523: now if you had a simple functional language that, unlike a really overrated CA from 1970, could express the notion of a memoizing implementation of itself
03:11:29 <ais523> it's pretty easy if you're OK with programs like if (false) while (true); not terminating
03:11:36 <ais523> but that's a pretty big restriction
03:15:15 <Jafet> so they found the unit cell and hashlife but failed to see that the resulting eigenratio is 1
03:15:49 <Jafet> But the main point as far as this blog goes is that "Life" has a self-interpreter, and it's eigenratio is exactly 5760! — http://eigenratios.blogspot.de/2007/09/self-interpreter-for-conways-game-of.html
03:17:49 <ais523> a self-interpreter that works from finitely many starting cells would be rather more impressive :-)
03:18:04 <ais523> also should be possible, and might even be possible with the same ratio
03:19:24 <Jafet> well, you only need to invent a fast enough breeder that lays more unit cells
03:19:39 <Jafet> it would probably have a larger period than 5760 though
03:22:18 <ais523> if it fits within 11520 it would be fine
03:22:47 <ais523> the speed of light might be the absolute speed limit in Life, but if starting from a finitely large pattern, things can't escape the pattern boundary faster than c/2 in the long term
03:26:15 <shachaf> Can you have a non-empty background for a finite pattern?
03:26:40 <shachaf> Some sort of infinite pattern that lets you communicate information more quickly.
03:27:01 <shachaf> I guess you would want all your patterns to preserve it.
03:30:29 <Jafet> well, a breeder for unit cells would likely have a six-digit period
03:32:23 <Jafet> (or more than six, but the unit cell seems to be made of standard parts so a fast glider synthesis shouldn't be too hard)
03:33:43 <Jafet> imode: a succinct data structure is just one that has o(n) overhead
03:35:24 <Jafet> this generously includes trees with O(n/(log n)^(1+ε)) pointers of O(log n) bits each
03:35:33 <imode> again, that is not the lower bound.
03:36:42 <ais523> shachaf: in general, yes, but I'm thinking about the specific case of an only-dead-cells background
03:37:56 <Jafet> golly supports a toroidal grid, which could be expanded to support a periodic background
03:39:08 <shachaf> Presumably a periodic background is reasonably easy to implement in hashlife -- you just need to change the way you grow the grid.
03:39:40 <Jafet> yes, though having to pad it to powers of 2 would be annoying
03:58:44 -!- PattuX has quit (Quit: Connection closed for inactivity).
04:01:36 <ais523> I don't think multiocular O is a common piece of computational order notation
04:04:38 <Jafet> creationists use it to denote information lower bounds — the eyes signify irreducible complexity
04:05:04 <shachaf> Nor is it a common character in Cyrillic manuscripts.
04:05:27 <shachaf> Creationists? Is that people who use ꙮ_CREAT?
04:07:10 <Jafet> ꙮ̃ is used when a log gets in the eye, or perhaps a 2-by-4
05:00:26 -!- btiffin has quit (Remote host closed the connection).
05:13:57 -!- olsner_ has changed nick to olsner.
05:15:03 <zzo38> Apparently some Java-based HTTP client interpreted "gopher://zzo38computer.org" as a relative URI, even though clearly by its syntax it isn't.
05:15:07 <Hoolootwo> golly supports one periodic background, but that's only for b0s8 rules
05:15:38 <Hoolootwo> and where the background switches from on to off every generation
05:15:44 <Jafet> the parity hack doesn't really count
05:16:09 <Hoolootwo> http://golly.sourceforge.net/Help/Algorithms/QuickLife.html
05:16:22 <Hoolootwo> I think that explains it better than I could here
05:26:44 -!- xkapastel has joined.
05:31:45 <zzo38> What does "eigenratio" mean here?
05:33:01 <shachaf> Oh, b,s means born,survive
05:34:36 <Hoolootwo> oh, :/ thought that page said that
05:43:28 <ais523> zzo38: "zzo38computer.org" is technically a relative domain name; the absolute version is "zzo38computer.org."
05:43:38 <ais523> however, for some reason it became standard to write URLs without the trailing dot
05:44:35 <zzo38> ais523: OK, but is still not a relative URI
05:44:44 <shachaf> Do you mean it interpreted it as ./gopher:/zzo38computer.org?
05:45:34 <zzo38> Yes, that is what it did, it look like
05:48:45 -!- augur has quit (Remote host closed the connection).
05:53:44 <shachaf> If I get a vanity TLD, can I put an MX record on it?
05:54:55 <ais523> there's no technical restriction against that
05:55:12 <ais523> there might or might not be a political one (e.g. ICANN only agreeing to sell you the TLD if you don't host anything on the TLD directly)
05:55:32 <ais523> or, well, it's a known fact as to whether or not there's a political restriction, but not known by me
05:57:18 <shachaf> There was a URL shortener on a two-letter country TLD once.
05:57:53 <shachaf> I bet lots of bad email regexps would reject an email address like that.
06:03:21 <Jafet> hmm https://serverfault.com/questions/154991/why-do-some-tld-have-an-mx-record-on-the-zone-root-e-g-ai
06:10:09 <Jafet> I wonder if /bin/hostname should ship with a copy of this table
06:10:43 <Jafet> I guess that would only solve half the problem
06:28:01 <Jafet> shachaf: a table of TLDs with strange DNS records
06:30:35 <Jafet> seems that ai. no longer has an MX record, though it still has A, NS, and a conspicuous lack of SOA
06:30:39 -!- erkin has joined.
06:44:08 -!- erkin has quit (Quit: Ouch! Got SIGABRT, dying...).
06:45:08 -!- newsham has quit (Ping timeout: 260 seconds).
06:49:58 <shachaf> Jafet: It looks like it has an MX record to me?
06:51:26 -!- FreeFull has quit.
06:51:47 <shachaf> Is .home a generic TLD? It would make a good email address for inquiries regarding distributed computing projects.
07:00:18 -!- erkin has joined.
07:03:49 -!- hakatashi has joined.
07:14:33 -!- newsham has joined.
07:17:27 -!- ais523 has quit (Ping timeout: 260 seconds).
07:34:53 -!- doesthiswork has quit (Quit: Leaving.).
07:49:19 -!- oerjan has joined.
07:59:10 -!- ybden has quit (Ping timeout: 240 seconds).
08:01:45 -!- ybden has joined.
08:12:57 -!- erkin has quit (Read error: Connection reset by peer).
08:13:29 -!- erkin has joined.
08:21:33 <deltab> no (but homes and house are)
08:21:43 <deltab> Wikipedia says "BT hubs use the top-level pseudo-domain home for local DNS resolution of routers, modems and gateways."
08:22:48 <deltab> https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains
08:26:08 -!- hppavilion[1] has quit (Ping timeout: 240 seconds).
08:41:56 -!- tuu has joined.
08:59:41 -!- sebbu has joined.
09:44:58 -!- imode has quit (Ping timeout: 276 seconds).
10:04:56 -!- oerjan has quit (Quit: Later).
10:05:14 <\oren\> look at this shit http://imgur.com/k86avnF
10:07:58 <\oren\> I think an intersection between 6 or more streets should be required to be a roundabout
10:25:07 -!- mroman has joined.
10:25:22 <mroman> esolangs.org is down.
10:26:21 -!- xkapastel has quit (Quit: Connection closed for inactivity).
11:25:59 <fizzie> I can't count the number of times that has been said already.
11:26:09 <fizzie> I do have alerting on it as well.
11:27:12 <fizzie> Anyway, will set up the backup thing properly once I get home from the airport and unpack a little.
11:34:01 -!- boily has joined.
11:35:41 -!- PattuX has joined.
11:48:24 <mroman> well.. I'm not pressuring you.
11:48:42 <mroman> it's really the least important site in my life.
11:52:08 <mroman> the most important is int-e's cheap server :D
11:52:13 <mroman> because it hosts the online shell.
12:03:27 -!- zseri has joined.
12:11:17 -!- jaboja has joined.
12:12:35 <boily> fungot: can you be HackEgo?
12:12:35 <fungot> boily: you you can start ' em in the paper
12:12:51 * boily starts HackEgo in the paper
12:27:55 -!- boily has quit (Quit: DECLARED CHICKEN).
12:49:44 -!- zseri has quit (Ping timeout: 260 seconds).
12:54:47 -!- zseri has joined.
13:02:25 -!- heroux has quit (Ping timeout: 246 seconds).
13:02:35 -!- heroux has joined.
13:10:13 <mroman> - + + + ] + > [ [ + > < > ] - > [ - ] ] [ - < - + + ] - < < - > > + < - > [ < ] + > - + ] < ] < + - < - - [ < ] >
13:27:44 <mroman> stupid evolver produces stupid programs
13:28:13 <mroman> has anybody ever done evolving html/css
13:28:16 <mroman> to fit a specific design
13:38:02 -!- Labeo has joined.
13:42:20 -!- Labeo has quit (Quit: Mutter: www.mutterirc.com).
13:44:31 -!- Labeo has joined.
13:47:41 -!- LKoen has joined.
13:48:31 -!- Labeo has quit (Client Quit).
14:00:30 -!- doesthiswork has joined.
14:01:38 -!- Labeo has joined.
14:03:14 -!- mroman has quit (Ping timeout: 260 seconds).
14:06:55 -!- ais523 has joined.
14:13:54 -!- Labeo has quit (Quit: Mutter: www.mutterirc.com).
14:16:11 -!- erkin has quit (Ping timeout: 255 seconds).
14:22:54 -!- atslash has joined.
14:27:12 -!- erkin has joined.
14:27:57 -!- jaboja has quit (Ping timeout: 260 seconds).
14:32:07 -!- Mayoi has joined.
14:32:14 -!- erkin has quit (Disconnected by services).
14:37:42 -!- zseri has quit (Quit: Page closed).
14:39:40 -!- ais523 has quit (Remote host closed the connection).
14:40:50 -!- ais523 has joined.
14:42:09 -!- `^_^v has joined.
14:48:27 -!- LKoen has quit (Remote host closed the connection).
14:52:19 -!- tuu has quit (Remote host closed the connection).
14:59:41 -!- jaboja has joined.
15:03:26 -!- doesthiswork has quit (Quit: Leaving.).
15:05:25 -!- __kerbal__ has joined.
15:06:34 <__kerbal__> Does anyone know exactly what is wrong with the wiki?
15:07:25 <myname> no, we've only heard that question like a dozen times in the last few hours
15:16:22 -!- ATMunn has joined.
15:16:22 -!- ATMunn has quit (Changing host).
15:16:22 -!- ATMunn has joined.
15:18:10 -!- jaboja has quit (Ping timeout: 240 seconds).
15:21:11 <__kerbal__> https://www.youtube.com/watch?v=HuCJ8s_xMnI
15:21:20 <__kerbal__> One of the weirdest videos I've seen in a while
15:29:43 -!- Mayoi has quit (Quit: Ouch! Got SIGABRT, dying...).
15:42:12 -!- Bowserinator has quit (Excess Flood).
15:42:22 -!- Bowserinator has joined.
15:42:45 -!- Bowserinator has changed nick to Guest82305.
15:43:42 -!- augur has joined.
15:43:57 -!- __kerbal__ has quit (Quit: Page closed).
15:47:59 -!- augur has quit (Ping timeout: 255 seconds).
15:48:05 <rdococ> Heh, division is weird. You could consider multiplication its "opposite", but considering modulo its opposite also makes sense. :P
15:50:47 -!- contrapumpkin has joined.
15:57:56 -!- Guest82305 has changed nick to Bowserinator.
15:57:57 -!- Bowserinator has quit (Changing host).
15:57:57 -!- Bowserinator has joined.
15:58:45 <ATMunn> so uh, can someone explain funge-98's stack stack to me? im having trouble understanding the commands it uses
16:01:34 -!- ais523 has quit (Remote host closed the connection).
16:02:44 -!- ais523 has joined.
16:23:18 -!- LKoen has joined.
16:46:20 -!- Lord_of_Life has quit (Remote host closed the connection).
16:59:35 -!- LKoen has quit (Remote host closed the connection).
17:15:05 <rdococ> Concept: like the "break n;" idea, but with returning values. "return<2> x;", for example, would return x and force the function that called it to immediately return x too.
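C has nothing like return<2>, but the effect can be sketched with setjmp/longjmp; everything here (the names, the two-level protocol) is made up for illustration.

    #include <setjmp.h>
    #include <stdio.h>

    static jmp_buf two_up;  /* frame that "return<2>" unwinds to */
    static volatile int result;

    static void inner(void) {
        result = 42;
        longjmp(two_up, 1); /* acts like: return<2> 42; */
    }

    static int middle(void) {
        inner();
        return -1;          /* never reached */
    }

    int main(void) {
        if (setjmp(two_up) == 0)
            result = middle();
        printf("%d\n", result); /* prints 42 */
        return 0;
    }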
17:17:07 <myname> that would break encapsulation a lot
17:17:24 -!- Lord_of_Life has joined.
17:17:57 -!- LKoen has joined.
17:22:04 -!- AnotherTest has joined.
17:29:43 -!- LKoen has quit (Remote host closed the connection).
17:38:00 -!- FreeFull has joined.
17:38:12 -!- LKoen has joined.
17:39:31 -!- erkin has joined.
17:41:32 -!- AnotherTest has quit (Read error: Connection reset by peer).
17:41:51 -!- AnotherTest has joined.
17:44:27 -!- augur has joined.
17:45:55 -!- augur has quit (Remote host closed the connection).
17:47:13 -!- augur has joined.
17:58:44 -!- zseri has joined.
18:02:13 -!- AnotherTest has quit (Ping timeout: 276 seconds).
18:20:31 -!- AnotherTest has joined.
18:25:28 -!- LKoen has quit (Remote host closed the connection).
18:38:39 -!- AnotherTest has quit (Ping timeout: 255 seconds).
18:45:53 -!- imode has joined.
18:48:21 -!- AnotherTest has joined.
19:03:13 <ais523> rdococ: that operation exists in INTERCAL
19:03:22 <ais523> in fact, it's the only way to do flow control in INTERCAL-72
19:11:34 -!- erkin has quit (Quit: Ouch! Got SIGABRT, dying...).
19:13:38 -!- ais523 has quit (Ping timeout: 240 seconds).
19:35:08 -!- AnotherTest has quit (Ping timeout: 240 seconds).
19:40:42 -!- AnotherTest has joined.
19:48:41 -!- LKoen has joined.
20:01:09 -!- LKoen has quit (Remote host closed the connection).
20:01:29 -!- wob_jonas has joined.
20:02:04 <wob_jonas> ais523: I don't think that's the same. Intercal has multi-level return, that is, it can pop multiple entries from the return stack and return to the last one popped.
20:02:45 <myname> how is this different?
20:02:45 <wob_jonas> You could actually emulate that in GW-BASIC, which has a form of the RETURN statement that pops the return stack but jumps to a constant line number given in the statement.
20:03:04 <wob_jonas> myname: I think the original question was a multi-level break. As in, from while or for or do-while loops
20:03:35 <myname> it was a multi-level return like the existing multi-level break
20:07:45 -!- AnotherTest has quit (Ping timeout: 248 seconds).
20:08:30 -!- AnotherTest has joined.
20:09:55 -!- oerjan has joined.
20:14:35 <zzo38> Do you know when to expect the esolang wiki to be fixed?
20:15:37 <zzo38> wob_jonas: Yes, I have used that before, using RETURN to jump to a different line number (and have used it once to RETURN to the next line which is a RETURN to a constant line number, even)
20:17:32 -!- AnotherTest has quit (Ping timeout: 255 seconds).
20:19:35 -!- AnotherTest has joined.
20:19:40 <zzo38> PHP has multi-level break by number, while JavaScript has multi-level break by name. (Although I happen to think goto would be a better way of doing this anyways; you don't need much more than the single-level break/continue, as well as goto)
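For comparison, a minimal C sketch of the goto version zzo38 prefers (C itself has no multi-level break in either form):

    #include <stdio.h>

    int main(void) {
        for (int i = 1; i < 10; i++) {
            for (int j = 1; j < 10; j++) {
                if (i * j == 42)
                    goto found; /* breaks out of both loops */
            }
        }
        printf("no factor pair\n");
        return 0;
    found:
        printf("found a pair\n");
        return 0;
    }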
20:21:00 <Cale> zzo38: Just throw in callCC and call it a day
20:22:34 <ATMunn> :( why are there no good befunge-98 interpreters for windows
20:22:57 <myname> because developers don't use windows?
20:23:51 <ATMunn> there's not even any good online ones :(
20:24:21 <ATMunn> at least, i havent found any
20:24:31 <myname> i don't know which one anymore, though
20:24:38 <myname> i used to have one modified
20:25:10 <wob_jonas> actually there are way more developers who use windows than developers who use befunge
20:28:14 <zzo38> One of the features of NNIX is that the file number has to be a constant and it does not support variable file numbers. Do you know why?
20:29:18 <wob_jonas> zzo38: because it's just a toy OS interface that's enough for the examples in the book, not a real complete operating system?
20:29:37 <zzo38> I think it should be fixed
20:30:13 <wob_jonas> zzo38: it also doesn't support modifying an already written file without erasing its data first, that's much more annoying IMO
20:30:45 <wob_jonas> (and that's despite that the book claims it supports everything the file interface of C89 can do except remove files)
20:31:25 <wob_jonas> but in any case, the OS interface is extensible, an OS could add new system calls, it doesn't intend to be complete and closed like the CPU architecture itself
20:32:29 <zzo38> Yes, that is another thing to fix. There are a few other things too, such as adding a file control interface, and perhaps a convenience function for reading/writing one value to/from $255.
20:32:42 <ATMunn> myname: also, at some point ill get linux, but for now im stuck on windows so i have no choice but to use a windows or browser based one :/
20:32:48 <wob_jonas> what do you mean "file control interface"?
20:33:02 <zzo38> Similar to fcntl()
20:33:26 <zzo38> (Although you don't need all of the functions of fcntl)
20:34:38 <zzo38> Also similar to ioctl() for some devices
20:35:57 <wob_jonas> again, he only needs a little of OS interface for his examples. he did say he doesn't intend to create a full operating system.
20:36:01 <Cale> wob_jonas: hah, I actually kind of like that thing about not allowing modification of files after the fact.
20:36:11 <wob_jonas> if you want a full OS, just imagine a unix-like running on MMIX
20:36:37 <wob_jonas> Cale: you can modify files, it's just you can only do so if you do the equivalent of O_TRUNC
20:36:49 <zzo38> Actually fcntl() probably isn't needed, but a few of the controls of ioctl() may be, mainly the terminal controls.
20:39:48 <zzo38> One possible way that could be done is to add additional command-line arguments to the simulator to load .so files assigned to different X values in the TRAP instructions, where 0 means to use the built-in stuff.
20:40:16 <zzo38> That way you could add one extension for connecting to the X server, one extension for music, and so on
20:49:26 <wob_jonas> I'm trying to rig up some method to photograph a book, for which I need to hold both the book and the camera in place. But I am failing miserably, because I'm really bad at hardware stuff, and don't have many things to use at home.
20:51:05 <wob_jonas> (and it shouldn't obstruct lighting, which is basically impossible since I want to get the camera close to the book)
20:51:27 -!- AnotherTest has quit (Ping timeout: 240 seconds).
20:52:12 -!- AnotherTest has joined.
21:05:48 -!- AnotherTest has quit (Ping timeout: 240 seconds).
21:06:48 -!- AnotherTest has joined.
21:18:40 <Cale> wob_jonas: There are scanners which can feed a stack of pages through, if you don't mind destroying the book
21:20:33 <Cale> There was one point when I was in high school where I helped a friend of mine make a digital version of his mother's cookbook, and we cut the ring binding off a copy and fed it through a scanner, and then OCR'ed it... and OCR was terrible back then, so I had a lot of hand-editing to do. :P
21:21:55 <Cale> Still probably amounted to less work than typing out the whole book though
21:24:31 <wob_jonas> Cale: I don't want to destroy the book. A flatbed scanner is a good idea in general, and I did think of it,
21:25:37 <wob_jonas> but the problem is that the scanner I have access to has a maximum scan area of only slightly bigger than A4, and this book is bigger than that. The page content might just barely fit in that area, but I couldn't position it right.
21:26:28 <wob_jonas> Although destroying a book isn't such a bad idea actually. I didn't think of that. I can't destroy this library copy, but I might be able to locate a new or used copy of this that I can destroy.
21:26:52 <wob_jonas> That would make this somewhat easier, because then I only have to position the individual pages, but it's still not easy
21:28:06 -!- xkapastel has joined.
21:28:35 <Hoolootwo> the easiest way to mount a normal digital camera is by the mount on the bottom, usually
21:29:01 <Hoolootwo> I think it's a 1/4-20 thread, you could bolt it to something solid above the book
21:30:17 <wob_jonas> Hmm, it's not expensive. I could buy and destroy a copy.
21:30:26 <wob_jonas> I'd still have to figure out how exactly to photograph or scan it though.
21:35:34 <wob_jonas> Is there somewhere I can just borrow a flatbed scanner larger than A4?
21:40:32 <Hoolootwo> what if you scanned the pages in two parts sideways and stitched them back together
21:40:44 <Hoolootwo> print shops would probably have a big scanner
21:40:53 <wob_jonas> Hoolootwo: would be hard to stitch them accurately
21:41:01 <wob_jonas> and to position the pages accurately that way
21:41:04 <Hoolootwo> is there not software for that? :/
21:41:16 <Hoolootwo> I guess automatedly doing it is probably hard
21:41:29 <wob_jonas> It would be much better if scanned together
21:42:01 <wob_jonas> The image quality matters here. If it didn't, I'd just shoot the pictures with the camera handheld and be done with it.
21:42:37 <Hoolootwo> with scanning, I think you have plenty of resolution
21:42:57 <wob_jonas> Exactly, that's why a scanner would be better
21:45:26 <wob_jonas> Apparently this print shop has A3 sized scanners I can use (for a fee obviously)
21:49:12 -!- sleffy has joined.
21:49:26 <zzo38> Now I made up a way to make operating system interface extensions into the MMIX simulation. It is: http://sprunge.us/PAdY
21:53:15 <wob_jonas> I think I'll order a copy of this book.
21:57:12 <wob_jonas> I'll be able to get it in a few days. Then I can decide whether I want to scan it whole or cut up.
21:57:21 <wob_jonas> Cut up would probably be more precise.
22:02:34 -!- wob_jonas has quit (Quit: http://www.kiwiirc.com/ - A hand crafted IRC client).
22:07:38 -!- AnotherTest has quit (Ping timeout: 240 seconds).
22:08:09 -!- AnotherTest has joined.
22:11:22 -!- LKoen has joined.
22:12:42 -!- hppavilion[1] has joined.
22:17:23 -!- `^_^v has quit (Quit: Leaving).
22:17:42 -!- `^_^v has joined.
22:20:50 -!- paul2520 has joined.
22:20:50 -!- paul2520 has quit (Changing host).
22:20:50 -!- paul2520 has joined.
22:21:43 -!- AnotherTest has quit (Ping timeout: 258 seconds).
22:22:52 -!- AnotherTest has joined.
22:27:08 -!- AnotherTest has quit (Ping timeout: 240 seconds).
22:40:16 -!- btiffin has joined.
22:42:23 -!- `^_^v has quit (Quit: This computer has gone to sleep).
22:43:57 -!- oerjan has quit (Quit: Nite).
22:51:07 -!- zseri has quit (Quit: Page closed).
22:56:44 -!- ais523 has joined.
22:59:20 -!- boily has joined.
23:14:35 <boily> `w -- Will HackEgo ever be again \\ Did it trip over a slice of bread \\ I need my random wisdom \\ Perhaps it has fainted clean away?
23:14:55 <FreeFull> https://github.com/aaronduino/asciidots/
23:24:23 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”).
23:24:55 <shachaf> fizzie: less fizzie, more fixxie twh
23:35:50 -!- imode has quit (Ping timeout: 240 seconds).
23:59:27 -!- augur has quit (Ping timeout: 240 seconds).