00:00:08 <hppavilion[1]> shachaf: I can't find what pointwise indexing means 
00:00:27 <int-e> I'm confused by the [punctuation] 
00:01:06 <int-e> and [] is somewhat overloaded. 
00:01:32 <int-e> `learn Nits are there to be picked. 
00:01:35 <HackEgo> Learned 'nit': Nits are there to be picked. 
00:02:35 -!- p34k has quit. 
00:03:37 <boily> nits are louse eggs hth 
00:03:39 <oerjan> int-e: um it means it's optional? 
00:04:34 <oerjan> well, technically that's also optional hth 
00:05:13 <HackEgo> Learned 'optional.': optional. 
00:05:31 <int-e> `` cd wisdom; grep '\.\.\.' * 
00:05:42 <HackEgo> arothmorphise:arothmorphise ... antormo... antrohm... ant... oh bugger. This should go in the `misspellings of antrhrop... atnhro...' entry. \ code:[11,11,11,15,15,23,12],[5,5,5,3,53,45,16,26,00,20,15,16,22,25,45,91,32,11,15,27,06,01,11,01,47,22,30,13,43,21,11,13,29,61,65,17,19,12,28,17,11,01,23,20,16,20,81,18,32,25,58,22.,1985,10.301350435,1555466 
00:05:45 <HackEgo> #!/bin/bash \ topic=$(echo "$1" | lowercase | sed 's/^\(an\?\|the\) //;s/s\?[:;,.!?]\? .*//') \ echo "$1" >"wisdom/$topic" \ echo "Learned '$topic': $1" 
00:06:23 <oerjan> oh right, the space is not optional if it's to remove any of the rest 
00:06:24 <int-e> `` cd wisdom; grep -l '\.\.\.' * 
00:06:25 <HackEgo> arothmorphise \ code \ hthmonoid \ grep: le: Is a directory \ learn \ `learn \ northumberland \ grep: ¯\(°_o): Is a directory \ grep: ¯\(°_o): Is a directory \ \oren\ \ procrastination \ qdb \ quoteformat \ remorse 
00:06:44 <HackEgo> Northumberland may be today a sparsely populated country... but SOON! THE NORTHUMBRAINS SHALL RISE! 
00:07:22 <int-e> `culprits wisdom/northumberland 
00:07:26 <HackEgo> oerjan elliott Bike FreeFull Taneb 
00:08:29 <shachaf> hppavilion[1]: It means each element in the tuple gets indexed on its own. 
00:09:07 <hppavilion[1]> shachaf: https://en.wikipedia.org/wiki/Tuple does not speak of "indexing" 
00:09:11 <shachaf> hppavilion[1]: Try figuring out what indexing would mean and I'll tell you whether it's right. 
00:09:24 <shachaf> Well, this is indexing in the usual sense. 
00:10:19 <hppavilion[1]> shachaf: So... hm... OH! Is it at all like ~ in INTERCAL? 
00:10:37 <oerjan> <int-e> *reads about the ending phase* ...  could there be an infinite loop of cleanup steps... <-- you should reask that with ais523 around hth 
00:11:02 <hppavilion[1]> shachaf: x~y is all the bits of x for which the corresponding bit in y is 1, right-justified 
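hppavilion[1]'s description of INTERCAL's `~` can be sketched in a few lines of C++. This is an illustration of the behaviour described, not INTERCAL's implementation; `select_bits` is an invented name.

```cpp
#include <cassert>
#include <cstdint>

// Sketch of INTERCAL's select operator (x~y) as described above: keep
// each bit of x whose corresponding bit in y is 1, and pack the
// survivors contiguously at the low (right-justified) end of the result.
uint32_t select_bits(uint32_t x, uint32_t y) {
    uint32_t result = 0;
    int out = 0;                              // next output bit position
    for (int i = 0; i < 32; ++i) {
        if (y & (1u << i)) {                  // y chooses which bits survive
            if (x & (1u << i)) result |= (1u << out);
            ++out;                            // survivors stay contiguous
        }
    }
    return result;
}
```

For example, selecting with mask `0b1100` keeps bits 2 and 3 of `x` and shifts them down to bits 0 and 1.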
00:11:13 <int-e> shachaf: it's ALL CAPS, what else could it be... I mean now that COBOL is dead? 
00:11:21 <ais523> int-e: there can be an infinite loop of cleanup steps, yes 
00:11:35 <ais523> it's a little hard to pull off because cards are typically designed to stop things triggering then 
00:11:52 <shachaf> help when did this turn into a mtg conversation 
00:12:06 <int-e> shachaf: oerjan looking through logs 
00:12:10 <hppavilion[1]> shachaf: What I mean is: is the composition of e.g. (17, 92, 12) and (1, 2) equal to (17, 92)?
00:12:48 <shachaf> hppavilion[1]: What are the domains and codomains of those arrows? 
00:13:45 <oerjan> hppavilion[1]: what shachaf means is that an arrow is not determined by its tuple alone 
00:13:48 <hppavilion[1]> shachaf: Or do you mean which numbers in particular for those arrows? 
00:14:28 <shachaf> An arrow : N -> M is an N-tuple of numbers < M 
00:14:30 <int-e> well, graphs are categories 
00:14:32 -!- sphinxo has joined. 
00:14:51 <hppavilion[1]> shachaf: Ah, I think I transcribed it to my notes wrong 
00:14:53 <shachaf> But M could be 100 or 1000 
00:14:56 <int-e> reflexive, transitive relations are 
00:15:07 <HackEgo> sphinxo: Welcome to the international hub for esoteric programming language design and deployment! For more information, check out our wiki: <http://esolangs.org/>. (For the other kind of esoterica, try #esoteric on EFnet or DALnet.) 
00:15:12 <int-e> (that's the example that I wanted) 
00:15:36 <hppavilion[1]> shachaf: Oh, so the arrows map numbers to all numbers greater than them, right 
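The "pointwise indexing" composition shachaf describes can be sketched directly: an arrow g : N -> M is an N-tuple of numbers < M, and composing it with f : M -> K indexes f's tuple by each entry of g's. This sketch uses 0-based indices; the chat's (17, 92, 12) and (1, 2) example reads as 1-based, where it gives (17, 92).

```cpp
#include <cassert>
#include <vector>

// Compose two arrows represented as tuples: (f . g)[i] = f[g[i]].
// Each element of g gets indexed into f on its own, pointwise.
std::vector<int> compose(const std::vector<int>& f, const std::vector<int>& g) {
    std::vector<int> fg;
    fg.reserve(g.size());
    for (int i : g) fg.push_back(f.at(i));  // each entry indexed individually
    return fg;
}
```

The arrow is not determined by the tuple alone, as oerjan says next: the codomain bound M (100, 1000, ...) is extra data not visible in the vector.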
00:15:58 <sphinxo> So what's the bees knees in esoteric langs? 
00:16:23 <ais523> sphinxo: in terms of newly popular? best-known? 
00:16:40 <int-e> sphinxo: well your puns seem to be up to par... welcome! 
00:17:06 <ais523> hmm, not sure if any esolangs have really caught on since Three Star Programmer 
00:17:11 <shachaf> int-e: whoa whoa whoa, when did this turn into a linear logic conversation 
00:17:27 <int-e> shachaf: you lost me 
00:17:47 <int-e> oerjan: the bees one 
00:18:17 <hppavilion[1]> sphinxo: One that isn't popular - but may be used by at least one person in the world someday, if I'm being generous - is a proof assistant I myself made called Thoof
00:18:24 <oerjan> i didn't notice it was a pun 
00:18:37 <hppavilion[1]> sphinxo: Based on Thue, which is a great language you should check out if you haven't already 
00:18:48 <int-e> oerjan: flew right over your head, eh... 
00:19:17 <sphinxo> hppavilion[1]: is it on github? 
00:19:55 <hppavilion[1]> sphinxo: But there are no published docs yet; however, I can publish the as-of-yet incomplete tutorial if you like 
00:20:04 <sphinxo> hppavilion[1]: Oh wait I think i've found it, in python right? 
00:20:20 <shachaf> Oh, I thought you were talking about hppavilion[1]'s brain. 
00:20:27 <shachaf> The joke seemed a little drawn out. 
00:20:35 <oerjan> int-e: well, bee's knees did fit there without having to reinterpret it. 
00:21:20 <hppavilion[1]> shachaf: Gah! Your and sphinxo's nicks are the same length and both start with s!
00:21:36 <shachaf> you're already always confused hth 
00:22:11 <shachaf> boily: have you figured out the mysterious category twh 
00:22:34 <oerjan> hppavilion[1]: a single starting letter seems a bit little to be confusing. 
00:22:41 <boily> which mysterious category? 
00:22:50 <shachaf> Oh, apparently this category has a name. 
00:23:08 <oerjan> shachaf: isn't it just a subcategory of Set 
00:23:13 <sphinxo> In the spirit of self promotion, i'd like to present one of my first forays into the world of #esoteric 
00:25:46 -!- sphinxo has left ("WeeChat 1.4"). 
00:26:13 -!- sphinxo has joined. 
00:28:02 <ais523> sphinxo: weird mix of languages :-) 
00:28:06 <ais523> (in here, that's probably a good thing) 
00:28:23 <ais523> makes sense though, ocaml's good at compilers, jvm is probably the most portable asm 
00:28:29 <shachaf> ais523: Do you understand par in linear logic? TWH 
00:28:40 -!- tromp has joined. 
00:28:50 <ais523> shachaf: what do you mean by par? I fear the answer is no 
00:28:56 <ais523> I understand the subsets of linear logic I use in my work 
00:28:57 <shachaf> The upside-down ampersand. 
00:29:15 <sphinxo> ais523: it was my first time doing ocaml actually 
00:29:22 <shachaf> Or ?A the exponential thing? 
00:29:26 <sphinxo> but I didn't really like it and went back to haskell 
00:29:45 <ais523> shachaf: _|_ is just "arbitrary false statement" in most logics 
00:29:56 <shachaf> sphinxo: Oh, that's where I remember you from. 
00:30:02 <ais523> I sort-of have a vague idea of how ? works but not enough to put it into words 
00:30:22 <shachaf> ais523: Well, there's _|_ and there's 0 
00:30:52 <ais523> linear logic sort-of has SCI syndrome 
00:30:55 <sphinxo> shachaf: yeah i'm the one generally asking the silly questions 
00:30:56 <ais523> but possibly even worse 
00:31:32 <ais523> (SCI is an affine logic, which has the problem that ('a * 'b) -> 'c and 'a -> ('b -> 'c) aren't isomorphic and most language constructs need to work both ways round) 
00:31:36 <ais523> syntactic control of interference 
00:31:45 <shachaf> This game semantics interpretation made the most sense to me. 
00:31:57 <shachaf> ais523: Oh, it has both an internal hom and a product but they're not adjoint? 
00:32:13 <shachaf> The product has no right adjoint and the internal hom has no left adjoint? 
00:32:33 <ais523> it causes utter chaos at the category theory level 
00:32:41 <ais523> in terms of programming it, it's only mildly annoying 
00:32:47 <sphinxo> y'all played tis-100? I imagine that'd be right up you guys/girls boats 
00:33:05 <shachaf> ais523: Sounds sort of reasonable. Maybe. 
00:33:06 <ais523> annoying enough, though, that SCI errors are something that I have to keep correcting in other people's code 
00:33:34 <shachaf> ais523: Anyway in this game semantics interpretation, when you have A#B, you run two games in parallel, one for A and one for B. 
00:33:40 <ais523> quite a bit of work on my thesis was trying to create a more categorically sensible SCI 
00:33:40 <shachaf> And you only have to win one of them. 
00:33:59 <shachaf> So for instance A # ~A is always true, because if you get a refutation on one side you can use it on the other side. 
00:34:07 <ais523> it turns out that it has hidden intersection types 
00:34:18 <shachaf> ais523: Hmm, I should read your thesis. 
00:34:23 <ais523> shachaf: hmm, that makes me think of a programming language construct 
00:34:35 <ais523> in which you give two terms, and it returns one of its arguments
00:34:51 <ais523> but it's guaranteed to return something other than bottom unless both arguments are bottom 
00:35:10 * ais523 wonders if the Haskell people would consider that pure 
00:36:35 <shachaf> ais523: Haskell people probably want a guarantee that they're equal unless they're bottom. 
00:36:42 <shachaf> https://wiki.haskell.org/Unamb 
00:37:16 <ais523> now I'm wondering if it's useful 
00:37:19 <ais523> I guess you could do sorting with it 
00:37:39 <ais523> one argument an O(n log n) worst case, the other an O(n) best case that sometimes blows up 
00:37:47 <shachaf> http://conal.net/blog/tag/unamb 
00:39:39 -!- tromp has quit (Remote host closed the connection). 
00:43:23 <shachaf> ais523: Oh, A # B is also ~(~A x ~B) 
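Written out in standard linear-logic notation (par as the cmll package's `\parr`, linear negation as `(-)^\perp`), the duality shachaf states, together with its mirror image, is:

```latex
A \parr B \;=\; (A^\perp \otimes B^\perp)^\perp,
\qquad
A \otimes B \;=\; (A^\perp \parr B^\perp)^\perp.
```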
00:45:51 -!- heroux has quit (Ping timeout: 250 seconds). 
00:55:27 -!- sphinxo has quit (Quit: WeeChat 1.4). 
01:01:45 -!- heroux has joined. 
01:02:54 -!- llue has quit (Quit: That's what she said). 
01:03:03 -!- lleu has joined. 
01:07:37 <boily> mwah ah ah. Tiamat is dead! 
01:08:25 <boily> dragonskin cloak is miiiiine! 
01:09:05 -!- tromp has joined. 
01:11:40 -!- carado has quit (Quit: Leaving). 
01:15:28 -!- Phantom_Hoover has quit (Read error: Connection reset by peer). 
01:15:42 -!- mad has joined. 
01:16:16 <mad> will someone explain this to me: why some programmers use C but have an aversion to C++ 
01:17:02 <mad> (especially on non-embedded platforms) 
01:19:32 <pikhq> Because the things that C++ is good at, C is about as good at, and the things that C++ does better than C, other languages do significantly better. So, C++ is a giant pile of complexity with minimal benefits. 
01:21:12 <mad> er, no, there is one class of stuff where C doesn't have the tools (like, you can do it but it's cumbersome), and java/C#/etc can't do it because of the mandatory garbage collector 
01:21:40 <mad> once you have lots of dynamic sized stuff C++ has a large advantage over C 
01:22:24 <pikhq> You know that there's languages out there besides C-family languages, Java-family languages, and P-family languages, right? 
01:22:46 -!- lynn has quit (Ping timeout: 252 seconds). 
01:23:01 <mad> this is why C++ is popular for making games (too much dynamic sized stuff for C, can't use java/C# because garbage collector creates lags) 
01:23:18 <pikhq> ais523: Gregor's joking name for Perl, Python, Ruby, etc. 
01:23:43 <mad> pikhq: what other language category is there? functional languages? 
01:24:37 <mad> the other languages I can think of generally aren't particularly fast 
01:25:26 <pikhq> https://en.wikipedia.org/wiki/Template:Programming_paradigms *cough* 
01:25:55 <pikhq> There's more programming language *categories* than you think there are languages, it sounds like. :) 
01:26:42 <pikhq> izabera: Gregor Richards, one of the channel members who's not been that active of late. 
01:26:54 <pikhq> He's still here though 
01:26:58 <pikhq> Gregor: Isn't that right? 
01:27:05 <mad> pikhq : that list is a split by paradigm, not by speed grade 
01:27:34 <pikhq> mad: C++ ain't exactly "fast" in idiomatic use... 
01:27:57 <pikhq> I mean, sure, you can write fast C++, but once you're using the STL you've abandoned all hope. 
01:28:06 <ais523> izabera: Gregor's most famous for writing EgoBot and HackEgo 
01:28:17 <mad> pikhq : not if you're using STL right 
01:28:20 <fizzie> I thought he was most famous for the hats. 
01:28:24 <HackEgo> Gregor took forty cakes. He took 40 cakes. That's as many as four tens. And that's terrible. 
01:28:39 <oerjan> it wasn't always laggy 
01:28:43 <mad> ie basically as a replacement for arrays [] except it manages the size 
01:28:58 <pikhq> Also, I wouldn't take game developers as a good example of "how to write programs"... 
01:29:09 <ais523> oerjan: if you want a cheap bot, see glogbackup (which is also Gregor's) 
01:29:59 <pikhq> Unmaintainable piles of shit that are written by the sort of people who are willing to accept 80 hour workweeks are par for the course. 
01:30:37 <izabera> that's a rant i've never heard 
01:31:02 <izabera> what's the problem with working too many hours a week? 
01:31:04 -!- Sgeo has quit (Ping timeout: 260 seconds). 
01:31:42 <pikhq> Um, humans are kinda bad at being productive that long. Especially at mentally intense tasks. 
01:32:17 <mad> if garbage collectors are ruled out you're left with, er, basically: C, C++, assembler, delphi, rust, and objective C (and I guess cobol and ada) 
01:32:24 <mad> as far as I can think of 
01:32:38 <pikhq> ... Have you never even heard of Forth? 
01:32:44 <mad> ok and forth 
01:32:56 <pikhq> Or Tcl, for that matter? 
01:32:59 <mad> ok and fortran 
01:33:06 * izabera adds bash to the list of non-garbage-collected languages 
01:33:29 <mad> how is python not garbage collected 
01:33:35 <pikhq> Python is reference counted. 
01:33:50 <mad> also it's dynamic typed which is a much larger speed disadvantage 
01:33:53 <ais523> reference counters fall into a similar category to garbage collectors to me 
01:34:00 <ais523> they have noticeable overhead, often more 
01:34:12 <ais523> the difference being that it's predictable overhead that always happens in the same places 
01:34:12 <pikhq> ais523: They're automatic memory management, but GC is a different technique. 
01:34:27 <ais523> they are not the same, but they have similar effects on a program 
01:34:28 <mad> ""The standard C implementation of Python uses reference counting to detect inaccessible objects, and a separate mechanism to collect reference cycles, periodically executing a cycle detection algorithm which looks for inaccessible cycles and deletes the objects involved."" 
01:34:33 <pikhq> Yes, not the same but similar. 
01:35:37 <mad> reference counting doesn't cause 100ms pauses in your app like the java GC does 
01:36:39 <pikhq> Does Java not have a way of using a more interactive-use-appropriate GC? 
01:36:56 <ais523> you can make hints to Java about when a good time to GC would be 
01:37:10 <mad> ais523 : in a video game, there's never a good time 
01:37:14 <ais523> but a) it doesn't have to respect them, b) you can't delay GC, only make it happen earlier (and hopefully not again for a while) 
01:37:38 <ais523> if you have the memory (and sometimes you do, but not always), you can just leak until the next loading screen and catch all the memory up there 
01:37:38 <mad> if your game has loading screens, yes 
01:38:14 <ais523> although in many, they're disguised, or short enough that you don't really register them 
01:38:22 <tswett> It happens you caught me at a bad time. 
01:38:26 <tswett> I have to go to bed now. 
01:38:31 <ais523> even in the disguised/short ones, a 100ms pause isn't noticeable 
01:38:34 <pikhq> Also, if you have a *good enough* GC, you should be able to only pause for short periods of time between frames. 
01:39:45 <mad> it would still be better to have only ref counting and no GC in that kind of programs though 
01:40:19 <ais523> mad: so if the root of a structure gets freed 
01:40:28 <ais523> you then have a pause while the rest of the structure gets freed recursively 
01:40:32 <ais523> refcounting doesn't remove pauses 
01:40:39 <ais523> simply makes it easier to predict when they'll happen 
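ais523's point can be seen with a toy ownership structure in C++: dropping the root of a chain frees everything below it in one cascade, at a predictable moment. `Node` and its `live` counter are purely illustrative.

```cpp
#include <memory>

// With owning pointers (or refcounts), destroying the root destroys
// the whole structure recursively: the "pause" happens all at once,
// right where the root is released.
struct Node {
    int value;
    std::unique_ptr<Node> next;  // owning link: a node's destructor destroys its tail
    static int live;             // how many nodes currently exist (for illustration)
    explicit Node(int v) : value(v) { ++live; }
    ~Node() { --live; }
};
int Node::live = 0;
```

Resetting the head of a 100-node chain takes `live` from 100 to 0 in a single destructor cascade; for very long chains that recursion itself is the pause (and in real code can even overflow the stack).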
01:40:56 <mad> but (1) other threads keep going 
01:41:14 <mad> as opposed to GC which has a "stop the world" phase where it pauses every thread 
01:41:35 <ais523> not necessarily, concurrent GCs exist 
01:41:39 <mad> so chances are the pause will happen on your data loading thread (not your gfx thread) 
01:41:39 <pikhq> That's only true of a subset of GCs. 
01:42:04 <mad> even concurrent GCs do have a "stop the world" phase, it's just much shorter 
01:42:13 <mad> (if what I've read is correct) 
01:42:23 <pikhq> By the same notion, so does malloc because malloc has a mutex. 
01:42:59 <ais523> pikhq: I've managed to deadlock on that mutex before now :-( 
01:43:25 <ais523> let's just say, SDL's situation with timing and concurrency is so bad I've decided to take a look at SFML to see if it's any better 
01:43:46 <pikhq> SDL is... not a well-designed library. 
01:44:48 <mad> yeah SDL is way less good than it should've been 
01:46:26 <boily> pygame makes SDL sane. 
01:47:18 <ais523> boily: does it prevent it polling every 1ms? 
01:48:28 <boily> IIRC, I don't think so. 
01:48:52 <mad> the other thing is that refcounting doesn't have heap compaction 
01:48:56 <mad> which is a good thing 
01:49:22 <pikhq> It's kinda a wash. 
01:49:36 <pikhq> (and orthogonal to refcounting, really) 
01:50:12 <pikhq> Heap compaction costs when it happens, but means the allocator can spend less time in allocation. 
01:50:47 <mad> heap compaction on 300megs of data isn't pretty 
01:51:04 <pikhq> I've forgotten how to count that low. 
01:52:20 <mad> like, it's all fine if it's server software and it doesn't matter if the whole app stops for half a second 
01:52:37 <mad> then, yes, by all means use java and C# and python and whatnot 
01:53:43 <pikhq> If a service pauses for half a second I get paged. 
01:58:14 <shachaf> pikhq: If an individual server has a GC pause of 500ms? 
01:58:40 <pikhq> shachaf: I exaggerate. 
01:58:55 <pikhq> shachaf: But we *do* have SLAs for response time to requests... 
02:00:01 <shachaf> I shouldn't talk about details in here anyway. 
02:00:27 <shachaf> Hmm, I think I know how to set off pikhq's pager. 
02:01:04 <pikhq> Joke's on you, I'm not on call right now 
02:01:36 <shachaf> But is your pager thing actually turned off? 
02:06:00 -!- andrew_ has joined. 
02:06:35 -!- Sgeo has joined. 
02:14:49 -!- hppavilion[1] has quit (Ping timeout: 244 seconds). 
02:15:12 <\oren\> why can't it just also allow else if and elsif? 
02:15:45 <lifthrasiir> probably elif is much used so it is easier to write in that way? 
02:16:24 <\oren\> true but it should allow elif, else if and elsif as alternatives 
02:17:10 * izabera googled it and it's an actual thing 
02:18:54 -!- mysanthrop has joined. 
02:21:48 <\oren\> izabera: why do you think I hate him? 
02:22:12 <HackEgo> mysanthrop: Welcome to the international hub for esoteric programming language design and deployment! For more information, check out our wiki: <http://esolangs.org/>. (For the other kind of esoterica, try #esoteric on EFnet or DALnet.) 
02:23:07 <prooftechnique> I wonder if I can get mutt working on a jailbroken iPhone 
02:23:48 -!- j-bot has quit (Ping timeout: 248 seconds). 
02:23:48 -!- myname has quit (Ping timeout: 248 seconds). 
02:23:48 -!- Alcest has quit (Ping timeout: 248 seconds). 
02:23:49 -!- MoALTz has quit (Ping timeout: 248 seconds). 
02:23:49 -!- nisstyre_ has quit (Ping timeout: 248 seconds). 
02:23:59 <izabera> unless your mutt has a much better interface than mine 
02:24:27 <\oren\> I just use a ssh app and use alpine 
02:24:32 <mad> how do C programmers live without std::vector and std::string 
02:24:32 <izabera> you bought an iphone, you clearly care about eye candy 
02:26:40 <\oren\> mad: i have a bunch of poorly written functions I copy from one project to the next over and over 
02:27:26 <pikhq> ... Or poorly, if you go by the average results. :P 
02:27:32 <mad> reallocate arrays every time they change size? 
02:27:50 <fizzie> Why would you do that if the std::vector implementation doesn't?  
02:28:10 <fizzie> It's not like it's rocket science to have a struct that has "size" and "capacity" separately. 
02:28:25 <mad> fizzie : true but then you might as well use std::vector 
02:28:40 <mad> which does that and it can't leak 
02:29:25 <\oren\> my functions resize them when they get to each power of two 
02:29:57 <mad> \oren\ : that's exactly what std::vector does 
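The growth strategy fizzie and \oren\ describe is a small amount of code: a struct with separate "size" and "capacity" fields, reallocating only when size would cross the current capacity (doubling, i.e. at powers of two). The names below are illustrative, not anyone's actual code, and error handling is omitted.

```cpp
#include <cstdlib>

// A minimal C-style growable array with size and capacity tracked
// separately, so most pushes touch no allocator at all.
struct intvec {
    int *data;
    size_t size;
    size_t capacity;
};

void intvec_push(intvec *v, int x) {
    if (v->size == v->capacity) {
        size_t newcap = v->capacity ? v->capacity * 2 : 1;  // grow geometrically
        v->data = (int *)std::realloc(v->data, newcap * sizeof(int));
        v->capacity = newcap;
    }
    v->data[v->size++] = x;
}
```

Geometric growth is what keeps pushes amortized O(1); this is essentially what std::vector does internally, minus exception safety and type generality.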
02:30:09 <fizzie> I don't think array resizing is a major source of memory leaks. 
02:30:25 <shachaf> I read this thing that was arguing that powers of two is one of the worst choices you could make. 
02:30:26 <int-e> "new" is your friend if you want to leak memory in C++. ("can't" really is too strong) 
02:30:47 <mad> well, the point is that std::vector<stuff> replaces stuff * 
02:30:55 <mad> stuff * can leak, of course 
02:31:06 <mad> std::vector<stuff> can't 
02:31:07 <int-e> C++ does have a couple of resource management idioms that C doesn't support, but it's far from golden anyway 
02:31:29 <\oren\> I like std::vector. I *HATE* std::ios[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[C[Ctream  
02:31:51 <shachaf> Maybe it was https://github.com/facebook/folly/blob/master/folly/docs/FBVector.md 
02:31:53 <lifthrasiir> iostream is a big raised middle finger to STL 
02:32:22 <lifthrasiir> I cannot really understand how can it be possible to have STL and iostream in the *same* standard 
02:32:22 <mad> int-e : C doesn't have std::vector, that's the real one that's missing and it's a major, major gaping hole 
02:32:53 <pikhq> mad: Anyways, frankly if you think that std::vector is your solution to memory management problems you are too unaware of the problems there are to solve to be having this discussion. 
02:32:56 <mad> lifthrasiir : 80% of the time I simply ignore iostream but use STL anyways 
02:32:59 <prooftechnique> "always non-negative, almost always measurable, frequently significant, sometimes dramatic, and occasionally spectacular" 
02:33:17 <mad> pikhq : if you need a special case then STL won't cut it true 
02:33:54 <mad> but in my experience, "variable sized array" is 10x more common than any other memory structure and its omission from C hurts hard 
02:33:59 <lifthrasiir> mad: yeah. STL is (within its design constraint) well-designed library, while iostream is an epic fail 
02:34:29 <pikhq> It's also by far the easiest data structure to implement, so... 
02:34:36 <\oren\> well, realloc() is basically the equivalent for C 
02:34:49 <\oren\> there's no operator renew 
02:35:10 <mad> pikhq : yeah but you reimplement it so often that it should be a language feature really 
02:35:12 <lifthrasiir> anyone who tried to write a new locale with iostream (to be exact, std::codecvt etc.) will understand that 
02:35:27 <pikhq> Sure, it'd be a nice addition to libc. 
02:35:51 <mad> there are, like, 4 features I care about in C++ 
02:36:29 <mad> std::vector, std::string, std::map, and putting functions in structs/classes for convenience (ie less typing) 
02:36:47 <pikhq> That's the same for everyone. Unfortunately, it's a different 4 for each person, and C++ has enough features that each individual human being gets their own set of 4 features. 
02:36:53 <mad> std::vector is not just a "nice addition", it's a major feature 
02:38:00 <\oren\> I just have a function for appending to an array 
02:38:06 <pikhq> (I suspect that C++ releases new versions to keep up with global population growth) 
02:38:08 <HackEgo> This wisdom entry was censored for being too accurate. 
02:38:27 <mad> pikhq : that is true 
02:39:08 <\oren\> apparr(char**array,int*size,char*part,int*partz); 
02:39:29 <int-e> https://developer.gnome.org/glib/stable/glib-Arrays.html 
02:39:31 <mad> realloc() isn't bad 
02:39:36 <boily> int-e: the mad that was censored isn't the mad that is in the chännel hth. 
02:40:03 <pikhq> Ugh, glib. glib makes C++ look *angelic* in comparison.
02:40:19 <\oren\> my function does realloc iff size would increase through a power of two 
02:41:04 <mad> \oren\ : yeah. I use std::vector for exactly that except with less potential mistakes 
02:41:06 <\oren\> I don't remember why partz is passed by pointer 
02:41:43 <boily> he\\oren\. more pointers, more stars, more sparkliness. 
02:41:52 <mad> pointers are evil 
02:42:14 <int-e> pikhq: sure but if the objection is that one has to reimplement resizable arrays all the time, that's one of the counterarguments that come to my mind 
02:42:26 <mad> except pointers that are essentially references, those are okay
02:42:27 <pikhq> int-e: Fair enough. :) 
02:42:46 <\oren\> mad: isn't that all pointers? 
02:42:53 <boily> \oren\: I see that you are still fonting ^^ 
02:43:06 <boily> (nice fraktur btw.) 
02:43:09 <\oren\> pointers and references are different words for the same thing 
02:43:27 <mad> \oren\ : well, basically if its pointing to data owned by some other structure, it's okay 
02:44:05 <mad> \oren\ : if it's pointing to a memory allocation and you get a leak if the pointer gets overwritten, then it's bad 
02:44:36 <\oren\> how's that different from references? 
02:45:38 <mad> well, c++ references are typically used in function declarations and they refer to some object 
02:46:05 <mad> you can't use c++ references to do allocation/deallocation so by definition they generally can't be evil 
02:46:29 <boily> it's C++ we're talking about. everything can be alignment-shifted. 
02:46:40 <mad> boily : and then it'll be slow 
02:46:51 <mad> but that's a rare case 
02:46:51 <\oren\> well then what good are they? you need some way to refer to non-stack memory... 
02:47:09 <prooftechnique> If every programmer were as disciplined as that, we'd already be out of work 
02:47:13 <int-e> I bet  delete &ref;  is valid 
02:47:16 -!- nisstyre_ has joined. 
02:47:55 <mad> \oren\ : easy, when you have a function that returns 2 things, one can be returned as a return value but the other has to be a pointer or reference argument and then the called function will write in it 
02:48:05 <mad> that's what references are for 
02:48:17 <mad> they're essentially interchangeable with pointers 
02:48:45 <\oren\> that's what I said, they're just a pointer. 
02:48:54 <mad> internally, c++ references are pointers yes 
02:49:02 <boily> time to have unevil, functional sleep. 'night all! 
02:49:03 <mad> basically they're just syntactic sugar 
02:49:09 -!- boily has quit (Quit: SELFREFERENTIAL CHICKEN). 
02:49:27 <mad> int-e : C++ doesn't guard against messing things up badly :D 
02:49:40 <\oren\> specifically, a int& is the same as a int*const, but with syntax sugar 
02:50:10 <\oren\> allowing you to code as if it's a int 
02:50:49 <int-e> \oren\: and it's much harder to pass in NULL. 
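mad's use case for references can be sketched concretely: a function that "returns two things", one through the return value and one through a reference out-parameter the callee writes into. `divmod` is a hypothetical example, not a standard function.

```cpp
// The quotient comes back as the return value; the remainder goes out
// through the reference parameter. Unlike an int* parameter, the
// reference cannot (without contortions) be null, per int-e's point.
int divmod(int a, int b, int &remainder) {
    remainder = a % b;
    return a / b;
}
```

At the call site this reads like passing a plain int, which is the "syntactic sugar over int* const" that \oren\ describes.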
02:50:52 <mad> basically if there's a way to code something with malloc/free/new/delete, and a way that doesn't involve these, I always go for way #2 
02:51:59 <prooftechnique> If you're not writing a custom malloc implementation every time, are you really doing your job? 
02:52:22 <mad> the standard malloc goes through the bucket allocator 
02:52:31 <pikhq> prooftechnique: I have a word for those people, but it's inappropriate for polite conversation. 
02:52:36 <mad> for typical uses it does a pretty good job 
02:52:44 <int-e> prooftechnique: If you're writing a custom malloc implementation every time, are you really doing your job? 
02:52:52 <\oren\> well at my work we use our own resizable array class 
02:53:07 <\oren\> instead of std::vector 
02:53:29 <\oren\> because apparently std::vector doesn't play well with threads or something 
02:53:31 <pikhq> The same is true of my work, but at this point I'm a little surprised we don't just have our own implementation of the STL... 
02:54:12 <mad> \oren\ : depends on when it changes size :D 
02:54:23 <HackEgo> NIH was /not/ invented by Taneb. 
02:54:43 <int-e> `culprits wisdom/NIH 
02:54:50 <mad> if you have a size change at the same time another thread looks or writes in the std::vector then you have a problem yes  
02:54:53 <pikhq> int-e: That's practically the Google way. 
02:55:00 <int-e> `culprits wisdom/nih 
02:55:07 <prooftechnique> I'm a little sad that the CPPGM is already running. It seems like it'd be a fun thing to fail at 
02:55:25 -!- ais523 has quit. 
02:55:28 <\oren\> int-e: well half our codebase is in an in-house language instead of c++, and the compile process uses another in-house language instead of makefiles, so you know.... 
02:55:30 <shachaf> pikhq: The Google way isn't exactly NIH. They have their own variant of it. 
02:57:06 <mad> \oren\ : basically whenever some std::vector can change size, it needs to be 100% mutexed, accessible by only 1 thread at a time, or else you're in trouble
02:57:18 <mad> the rest of the time it's the same as a C array 
02:58:39 <mad> supposedly copy-on-write containers work well with threading 
02:59:06 <\oren\> i think that's what we have NIHed 
03:00:06 <mad> the other case I've heard is code that had to work on stuff like the nintendo DS 
03:00:09 <\oren\> I haven't looked into the details since the interface is almost exactly the same as std::vector 
03:00:23 <mad> which if I'm not mistaken had a broken STL or something like that 
03:00:48 <\oren\> this has to work on coffeemachines and things 
03:00:55 <mad> my brother's company has a NIH std::vector equivalent because of that 
03:02:39 <mad> for strings, ironically, std::string basically complements char * 
03:03:09 <mad> char * strings are cool except that you basically can't store them, std::string fixes just exactly that 
03:05:19 <\oren\> can't store them where? 
03:05:37 <mad> well, char * has no storage 
03:05:59 <\oren\> what the heck does that mean? 
03:06:17 <mad> suppose you have to save some text data inside a struct 
03:06:33 <mad> your options are like 
03:07:28 <mad> char text[99]; // + 1 million runtime checks and prayer and hope that it never goes over 
03:08:41 <mad> char *text; // and then make sure it's set to null in every single constructor and make sure it's deleted in the destructor and then checks that it's not null every time you read it and malloc/realloc if it ever changes size 
03:09:12 <mad> std::string text; 
03:10:32 <mad> it's just that option #3 has way less common failure modes than option #1 and option #2 
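mad's "option #3" in code: a struct holding its text as std::string needs no null checks, no constructor bookkeeping, and no destructor, since the string owns, grows, copies, and frees its own storage. `Record` is an invented name for illustration.

```cpp
#include <string>

// Compare with option #1 (char text[99] plus runtime checks) and
// option #2 (char *text plus manual null-init/free/realloc): here the
// compiler-generated special members handle all of it.
struct Record {
    int id;
    std::string text;  // owns its characters; resizes automatically
};
```

Copying a `Record` deep-copies the text, so the double-free and dangling-pointer failure modes of option #2 simply do not arise.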
03:10:48 <\oren\> std::string could be replaced with a bunch of functions that take char* and handle everything you just said. 
03:11:02 <mad> \oren\ : yes that's option #2 
03:11:08 <mad> char * in the class 
03:11:23 <\oren\> but the point is I already have such functions 
03:11:51 <oerjan> `addquote <shachaf> pikhq: The Google way isn't exactly NIH. They have their own variant of it. 
03:11:58 <HackEgo> 1270) <shachaf> pikhq: The Google way isn't exactly NIH. They have their own variant of it. 
03:12:31 <mad> \oren\ : and you never forget to put them in constructors, destructors, and to put checks against null? 
03:13:12 <\oren\> I don't have constructors or destructors, and all my string handling functions check for null 
03:13:41 <\oren\> (because I'm writing in C, which doesn't have constructors or destructors) 
03:13:59 <mad> \oren\ : well, when mallocating and freeing structs of that type then 
03:14:11 <mad> of the type that contains the char * 
03:14:37 <\oren\> well, since my usual first step is somthing like: 
03:15:10 <\oren\> struct foo *f = newfoo(); 
03:16:08 <\oren\> struct foo *f = malloc(sizeof(struct foo)); *f = nullfoo; return f 
03:16:30 -!- oerjan has quit (Quit: Late(r)). 
03:16:43 <\oren\> that doesn't happen, because I have a prototype for all foo objects (nullfoo) 
03:17:08 <mad> and you have a deletefoo() matching with every newfoo() ? 
03:18:00 <mad> yeah i guess that works 
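\oren\'s C idiom, sketched in C-flavoured C++: a constant "prototype" object `nullfoo` puts every freshly malloc'd `foo` into a known state, and a matching `deletefoo` frees whatever the struct owns. The fields of `foo` here are invented for illustration; only the newfoo/nullfoo/deletefoo pattern is from the chat.

```cpp
#include <cstdlib>

struct foo {
    char *name;   // hypothetical owned string field
    int size;
};
static const foo nullfoo = { nullptr, 0 };

// Allocate and initialize in one step, so no field is ever garbage.
foo *newfoo(void) {
    foo *f = (foo *)std::malloc(sizeof(foo));
    if (f) *f = nullfoo;
    return f;
}

// The matching destructor-by-convention for every newfoo().
void deletefoo(foo *f) {
    if (!f) return;
    std::free(f->name);  // free(NULL) is a no-op, so no check needed
    std::free(f);
}
```

The discipline replaces C++'s constructors and destructors: correctness depends on every caller pairing newfoo with deletefoo, which is exactly the failure mode mad is probing at.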
03:19:18 <\oren\> I even have some functions that can delete an array, taking a pointer to a delete function to be called on each element 
03:19:47 <mad> makes sense 
03:20:13 <\oren\> it's an obvious extension of the precedent set by qsort and bsearch 
03:20:35 <\oren\> they just didn't bother with it in the C stdlib 
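A hedged sketch of the array-deleting helper \oren\ describes, following the qsort/bsearch precedent of taking a function pointer per element; `delete_array` and its signature are assumptions, not his actual code.

```c
#include <stdlib.h>

/* Free an array of n pointer elements, calling a caller-supplied
   destructor on each one first -- the same "pass a function pointer"
   convention qsort and bsearch use for their comparators. */
void delete_array(void **arr, size_t n, void (*del)(void *))
{
    if (!arr)
        return;
    for (size_t i = 0; i < n; i++)
        if (del)
            del(arr[i]);
    free(arr);
}

/* Tiny instrumented destructor, just for demonstration. */
static int deleted;
static void count_delete(void *p) { free(p); deleted++; }
```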
03:20:57 <mad> It's kind of the reverse of my coding style (which could be summarized as "avoid malloc/free unless there's really no other option") but I guess it's sorta functional 
03:21:29 <\oren\> it's what you do if you're writing C and not C++ 
03:21:47 <mad> which makes sense if you're doing embedded coding yes 
03:25:41 -!- nortti_ has joined. 
03:25:42 -!- int-e_ has joined. 
03:26:20 -!- puck1pedia has joined. 
03:26:27 -!- lambda-calc has joined. 
03:26:27 -!- lambda-11235 has quit (Ping timeout: 260 seconds). 
03:26:28 -!- aloril_ has quit (Ping timeout: 260 seconds). 
03:26:29 -!- puckipedia has quit (Ping timeout: 260 seconds). 
03:26:29 -!- Gregor has quit (Ping timeout: 260 seconds). 
03:26:30 -!- nortti has quit (Ping timeout: 260 seconds). 
03:26:30 -!- atehwa_ has quit (Ping timeout: 260 seconds). 
03:26:30 -!- catern has quit (Ping timeout: 260 seconds). 
03:26:30 -!- quintopia has quit (Ping timeout: 260 seconds). 
03:26:30 -!- int-e has quit (Ping timeout: 260 seconds). 
03:26:52 -!- Gregor has joined. 
03:27:35 -!- bender|_ has joined. 
03:27:40 -!- puck1pedia has changed nick to puckipedia. 
03:28:06 -!- aloril_ has joined. 
03:31:06 -!- atehwa has joined. 
03:31:28 -!- ais523 has joined. 
03:31:29 -!- ais523 has quit (Remote host closed the connection). 
03:31:30 -!- j-bot has joined. 
03:37:44 -!- quintopia has joined. 
03:43:29 -!- hppavilion[1] has joined. 
03:43:36 -!- catern has joined. 
04:02:17 -!- hppavilion[1] has quit (Ping timeout: 244 seconds). 
04:12:33 -!- hppavilion[1] has joined. 
04:15:07 -!- ais523 has joined. 
04:15:49 <ais523> OK, so SFML uses a very thread-centric model 
04:16:03 <ais523> e.g. there's no way to inject user-defined events, no way to do timers, etc. 
04:16:51 <ais523> however, it /also/ doesn't define any safe way to communicate between threads, other than mutexes, and I don't think you can form the equivalent of a select() out of mutexes 
04:17:08 * ais523 is in #esoteric, and thus takes questions like "can you create a message queue out of nothing but mutexes" seriously 
04:18:26 <ais523> so the question is, what are the sensible cross-platform ways to merge events coming in from multiple threads, when your threading primitives suck? 
04:20:24 <ais523> note: something you /could/ do entirely within SFML is to create a TCP listening socket and use that, but a) this uses up a global system resource (open ports), b) there's no way to restrict connections to localhost so it's more than a little insecure 
04:20:35 <ais523> (no way within SFML's API, that is; you can obviously do it in TCP) 
04:21:34 <coppro> ais523: define "out of nothing but mutexes" 
04:22:04 <coppro> are we talking about communication via try_lock()?  
04:22:09 <ais523> the only thread-safe blocking primitive that you have available is the mutex lock, which will block if another thread has the mutex locked 
04:22:32 <ais523> the problem isn't transferring the data, because you can do that via shared memory 
04:22:37 <ais523> (which is the default for threading) 
04:22:52 <ais523> the problem is blocking until there's a message ready to receive 
04:23:11 <ais523> and AFAICT, the problem is that you can only try to lock one mutex at a time, and a specific thread holds it 
04:23:28 <ais523> and so you're blocked until that specific thread gives you permission 
04:23:32 <ais523> (also you can't do anything meanwhile) 
04:25:04 <ais523> it's basically the opposite situation to the situation for which mutexes were designed; we don't have one process holding the lock and many waiting on it, we have many processes holding the lock and one waiting on one of them to release it 
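Outside SFML's primitives, the standard portable answer to "block until there's a message ready to receive" is a condition variable paired with the mutex. A minimal sketch with plain POSIX threads (all names mine; fixed-size queue, overflow unchecked for brevity):

```c
#include <pthread.h>

/* Mutex-protected queue plus a condition variable: the shape of the
   primitive SFML is missing.  Producers push, the one event thread
   pops, and the pop blocks without polling when the queue is empty. */
typedef struct {
    int             buf[64];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
} msg_queue;

void mq_init(msg_queue *q)
{
    q->head = q->tail = q->count = 0;
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->nonempty, NULL);
}

void mq_push(msg_queue *q, int msg)     /* any producer thread */
{
    pthread_mutex_lock(&q->lock);
    q->buf[q->tail] = msg;
    q->tail = (q->tail + 1) % 64;
    q->count++;
    pthread_cond_signal(&q->nonempty);  /* wake the event thread */
    pthread_mutex_unlock(&q->lock);
}

int mq_pop(msg_queue *q)                /* the single event thread */
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)               /* blocks; no busy loop */
        pthread_cond_wait(&q->nonempty, &q->lock);
    int msg = q->buf[q->head];
    q->head = (q->head + 1) % 64;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return msg;
}
```

This is what the mutex-only puzzle below is working around: SFML supplies the mutex but not the condition variable.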
04:26:07 <mad> isn't SFML a multimedia library? 
04:26:46 <ais523> however this means it contains an event loop 
04:27:09 <ais523> and its event loop uses a "use different threads for different sorts of events" model (implicitly in that it doesn't support timers, has sockets as a separate thing from windows, etc.) 
04:27:18 <ais523> it also supplies threads, and mutexes 
04:27:32 <ais523> but this isn't enough to be able to communicate between threads without polling AFAICT 
04:28:16 <coppro> ais523: yes, I don't think it's possible either 
04:28:31 <mad> I'm not familiar with how it's done in the networking world 
04:28:43 <coppro> under the hood, anyway 
04:29:10 <ais523> so what I want is either a solution a) inside SFML using other primitives it has (IMO impossible), or b) using cross-platform primitives that are widely implemented 
04:29:36 <ais523> I could use pthreads, I guess; however I don't know how that works on Windows/Strawberry 
04:29:59 <ais523> and/or how well it plays with SFML (which after all, has its own threading abstraction) 
04:30:07 <mad> wait, what's the thing you can't do with mutexes? 
04:30:27 <ais523> mad: block until something happens on any of multiple threads 
04:31:10 <ais523> coppro: semaphores would work fine, but SFML doesn't supply them as a primitive 
04:31:18 <coppro> ais523: most platforms do though 
04:31:19 <mad> ais523 : oh I see 
04:31:31 <mad> ais523 : ...what's the application for that? 
04:31:32 <coppro> hard to find something more primitive 
04:32:04 <ais523> mad: the situation is that I am writing a library (libuncursed; coppro's worked on it in the past too) that presents an event-loop interface to programs using it 
04:32:16 <ais523> and abstracts over a number of different backends (currently, POSIX, Windows, and SDL) 
04:32:33 <mad> hmm, how about 
04:33:10 <mad> event handling thread blocks on one mutex 
04:33:11 <ais523> there are others that could be sensible, too (e.g. X, GDI) 
04:33:34 <mad> any of the multiple other threads can unlock that mutex 
04:33:42 <ais523> you can't unlock a mutex unless you hold it, surely 
04:33:48 * ais523 checks to see if SFML have messed this up 
04:34:20 <ais523> hmm, it doesn't say that you can't unlock a mutex while another thread holds it 
04:34:43 <ais523> perhaps it's worth experimenting with 
04:35:00 <ais523> seems vulnerable to race conditions but that maybe isn't insoluble 
04:35:17 <ais523> (e.g. using a separate mutex to protect the signalling one) 
04:35:21 <mad> that mutex would only be used to pause the event handling loop 
04:36:18 <mad> each particular resource would have its own mutex so that the owner thread of that resource would unlock its resource, then unlock the event handling thread's mutex 
04:36:26 <ais523> these mutexes are recursive 
04:36:57 <ais523> the obvious algorithm, assuming you can unlock someone else's mutex, ends with the event handling thread intentionally deadlocking on itself 
04:37:02 <ais523> but you can't do that with a recursive mutex 
04:37:15 <ais523> so we'll have to create a separate thread purely to deadlock it 
04:38:14 <ais523> so three locks (A, B, C), two "special" threads (event and deadlock), N generic threads 
04:38:52 <ais523> neutral state is A locked by deadlock, event waiting on it; B locked by event, deadlock waiting on it; C unlocked 
04:39:18 <ais523> when a generic thread wants to send a message, it locks C, pushes the message on a queue, unlocks A if the queue was empty (this is protected by C), unlocks C 
04:40:35 -!- XorSwap has joined. 
04:41:26 <ais523> when event gets past the deadlock, it locks C, and handles messages from the queue until it's empty; then, hmm 
04:41:31 <ais523> SFML doesn't even have a trylock 
04:41:42 <mad> what sort of use is having a general event handling thread like that for? 
04:41:47 <ais523> so how do we get back into the deadlocked state? 
04:42:33 <ais523> mad: say you want to wait for a key to be pressed, or for 1 second to pass 
04:42:44 <ais523> and the timer thread and keypress handling thread have to be different for some reason 
04:43:53 <mad> that's a bit of a weird test case 
04:43:58 <ais523> your two options are: run the entire logic of the program on whichever thread happened to be the one that received the event (key/timer); or send all the messages to the same thread 
04:44:41 <ais523> it's not a weird test case at all, it's a common enough operation that, say, both ncurses and uncursed provide a function that does exactly that (although ofc the timeout's configurable) 
04:44:58 <ais523> or for another example, say you want to wait for either a keypress, or receiving a network packet 
04:45:44 <mad> multimedia apps often just keep processing video frames and handle keypresses on next frame 
04:46:08 <ais523> that's a common way to write IRC clients (although in this case the responses to a keypress and to a network packet are different enough that you can run them on different threads without too much effort, that isn't something you should have to do) 
04:47:15 <ais523> mad: that's terrible for battery life, though 
04:47:22 <ais523> you want to be able to block until something happens, rather than having to poll 
04:47:31 <ais523> (in fact it's the reason I wanted to move away from SDL in the first place) 
04:48:13 <mad> I guess it depends on if you have the case where your app does nothing when there's no input 
04:48:54 <mad> which I guess is sensible for an irc client but not a game 
04:49:18 <ais523> mad: turn-based games often do nothing when there's no input 
04:49:31 <mad> unless they have audio 
04:50:05 <ais523> audio is one of those things that can safely be run in an independent thread, yes 
04:50:16 <ais523> or interrupt-to-interrupt, on less powerful systems 
04:50:25 <mad> yeah but that means you have at least one always active thread 
04:50:26 <ais523> this is why it's often the only thing that works when the rest of the game crashes 
04:50:43 <ais523> mad: no? audio thread blocks until the sample buffer drains, typically 
04:50:45 <mad> which means that you might as well do polling on your event handler thread 
04:50:52 <ais523> there's only so much the audio thread can do before blocking 
04:51:02 <mad> ais523 : yes, which happens at least 50 times per second 
04:51:05 <ais523> you're not running in a busy loop calculating samples 
04:51:12 <\oren\> do you have any primitive atomics on shared memory? 
04:51:37 <ais523> also 50fps is still slower than a typical video framerate 
04:51:47 <ais523> \oren\: std::atomic would work in this case, I think 
04:51:51 <\oren\> (although last time I touched that stuff I got terrible radiation burns) 
04:53:07 <mad> depends on what you mean by "atomic" 
04:53:56 <ais523> mad: a variable that supports operations that cannot be interfered with by other threads 
04:54:01 <mad> for typical cases it's really the operations you do on your primitive that are atomic, I guess... and yeah I guess std::atomic does this for you 
04:54:05 <ais523> there are a range of atomic operations, some more useful than others 
04:54:18 <ais523> test-and-set is a common example of a primitive that's powerful enough to build anything else 
04:54:36 <ais523> (set-to-specific-value, that is, not set-to-1) 
04:55:03 <mad> yeah, the equivalent of lock cmpxchg? :D 
04:55:04 <\oren\> yeah I think we used a swap operation in my OS class 
04:55:29 <\oren\> or maybe a compare and swap? 
04:55:52 <pikhq> Surely CAS. Just swap isn't sufficiently general I don't think. 
04:56:13 <ais523> pikhq: IIRC pure swap is sufficiently general, but much more complex to use 
04:56:26 <mad> I think it needs the compare to handle the case where some other thread has changed the value 
04:56:33 <mad> between the read and the write 
04:56:37 <ais523> pikhq: you can construct a boolean test-and-set out of a swap by swapping in a 0 or 1 
04:56:47 <ais523> swapped-out value is the test, swapped-in value is the set 
04:56:54 <pikhq> And you don't find hardware without CAS really, so it's not worth the effort. 
04:56:55 <\oren\> yeah we used just swap 
04:57:24 <\oren\> the OS ran on some sort of virtual machine 
04:57:35 <ais523> you basically use the test-and-set as a mutex to guard a non-atomic operation on shared memory 
04:57:43 <ais523> I think you might have to spin until the value is not set any more, though 
04:58:04 <mad> how does swap guarantee that some other thread hasn't changed the value after your read but before your write? 
04:58:05 <\oren\> yup, that's what we did, I remember it now 
04:58:29 <ais523> mad: atomic swap guarantees that because atomic 
04:58:47 <\oren\> i think maybe it just freezes the other processors? who knows 
04:58:49 <ais523> hmm, so SFML on Linux, at least, uses pthreads 
04:59:09 <ais523> \oren\: it actually uses quite a complex locking mechanism internally 
04:59:22 <ais523> the processors will block on the lock on the memory address if they try to access the same address 
04:59:29 <ais523> there might also be some memory barriers involved 
04:59:47 <\oren\> well, in my course we were on a virtual machine, so who knows 
04:59:52 <mad> ais523 : but you can't prevent the swap if the value has changed 
05:00:10 <mad> suppose you're trying to do an atomic increment 
05:00:14 <mad> value is 0 
05:00:22 <ais523> mad: you don't do the swap on the value you're incrementing 
05:00:26 <ais523> you do it on a second, guard value 
05:00:37 <ais523> which is 1 while in the middle of an increment, and 0 the rest of the time 
05:00:43 <ais523> to increment, first you swap the guard value with 1 
05:00:48 <\oren\> maybe cmpxchg is better for real processors because you don't need so much locking 
05:01:07 <pikhq> cmpxchg lets you have atomics without having a second guard value like that. 
05:01:13 <ais523> if you swapped a 0 out of it, then you do the increment, and swap a 0 back in (and will get a 1 after your swap unless shenanigans) 
05:01:16 <mad> \oren\ : cmpxchg lets you do atomic increment without a guard value yeah 
05:01:30 <ais523> if you swapped a 1 out of it, then you try again; you swapped a 1 with a 1 so you didn't interfere with the process that's currently doing the increment 
05:01:49 <\oren\> so they made us do it with swap only because it's harder 
05:01:52 <ais523> with compare-and-swap, what you do is you first (nonatomically) read the value, say it's x 
05:02:01 <ais523> then you swap in x+1 if the current value is x 
05:02:12 <ais523> if you swapped an x out, everything is fine, you're done 
05:02:13 <mad> ais523 : but what if you have a 1 and then a third thread comes in? then the third thread will see a false 0 
05:02:34 <ais523> if you didn't, then try again, you didn't change anything as you did a read and a failed-CAS 
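The compare-and-swap increment recipe ais523 just walked through, sketched with C11 atomics (`atomic_compare_exchange_weak` is the standard-library spelling of "swap in x+1 if the current value is x"):

```c
#include <stdatomic.h>

/* CAS increment: read x, try to install x+1 on the condition the
   value is still x, and retry if some other thread got in between. */
void cas_increment(atomic_int *val)
{
    int x = atomic_load(val);
    /* On failure, compare_exchange reloads the current value into x,
       so the loop body is just "try again with the fresh value". */
    while (!atomic_compare_exchange_weak(val, &x, x + 1))
        ;
}
```

As noted above, a failed CAS changes nothing, so retrying is harmless.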
05:03:22 <mad> wait I guess I see 
05:03:27 <ais523> here's my program: /*x*/ while (swap(guard, 1)); /*y*/ val++; /*z*/ swap(guard, 0) 
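ais523's guard-swap one-liner, fleshed out as runnable C11 using `atomic_exchange` (which, per the discussion that follows, carries the memory barriers a bare swap instruction might not):

```c
#include <stdatomic.h>

atomic_int guard = 0;   /* 1 while someone is mid-increment */
int val = 0;            /* the protected, non-atomic counter */

void guarded_increment(void)
{
    /* x: spin until we swap a 0 out of the guard; swapping a 1 with
       a 1 doesn't disturb whoever currently holds it */
    while (atomic_exchange(&guard, 1))
        ;
    val++;                       /* y: we own the guard, safe to touch val */
    atomic_exchange(&guard, 0);  /* z: release by swapping our 1 back out */
}
```

This is the swap-only construction from the OS-class discussion above: the atomic swap acts as a test-and-set mutex guarding a non-atomic operation.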
05:03:50 <mad> yeah that works if the cpu doesn't reorder memory writes 
05:04:10 <ais523> and an atomic swap is normally assumed to contain appropriate memory barriers 
05:04:18 <ais523> to protect anything that's ordered relative to it 
05:04:29 <mad> which means it should work on x86 but not necessarily other platforms 
05:04:34 <ais523> (either in the processor architecture itself, or because it's a wrapper for the instruction + the barrier) 
05:04:56 <pikhq> mad: The underlying instruction, sure, but any real-world use would have the appropriate memory barrier. 
05:04:56 <mad> ais523 : as opposed to cmpxchg which.... doesn't really need barriers I think? 
05:05:12 <pikhq> Because it's not at all helpful if it's not a synchronization primitive. :) 
05:05:43 <ais523> mad: well it depends on what the memory sequencing properties of the compare-and-swap are 
05:05:53 <ais523> it needs to contain at least a barrier on the things it's swapping 
05:06:09 <ais523> but really you need them in order to avoid time paradoxes 
05:06:16 <mad> well, the point of compare-and-swap is to have memory order guarantees against some other thread also doing compare-and-swap on the same value 
05:06:34 <mad> so presumably it has at least some kind of barrier against itself 
05:07:05 <pikhq> That's the "lock" prefix on x86. 
05:07:21 <pikhq> Without it, cmpxchg isn't atomic WRT other threads. :) 
05:07:25 -!- lleu has quit (Quit: That's what she said). 
05:07:28 <ais523> something that happens in Verity at the moment (assignment in Verity is atomic but has no barrier): new x := 0 in new y := 0 in {{x := 1; y := 2} || {y := 1; x := 2}}; print(!x); print(!y) 
05:07:41 <ais523> can print 1 1 even if you had a barrier between the parallel assignment and the prints 
05:08:25 <ais523> this is because there's no barrier between the assignments to x and to y, and in particular, the four assignments can happen /literally/ simultaneously, in which case it's unspecified which ones win 
05:08:46 <mad> that seems normal? 
05:09:03 <pikhq> Yes, but it's weird to people used to x86's memory model. 
05:09:15 <ais523> mad: well there isn't any way to interleave {x := 1; y := 2} and {y := 1; x := 2} that leaves both variables set to 1 
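ais523's claim — no sequential interleaving of {x := 1; y := 2} and {y := 1; x := 2} ends with both variables at 1 — can be checked exhaustively. A throwaway sketch (variable names from the example; the six orderings are every placement of the two A-ops among four slots):

```c
/* Ops: 0: x=1 (A first), 1: y=2 (A second),
        2: y=1 (B first), 3: x=2 (B second).
   Enumerate every order of the four writes that preserves each
   thread's program order; report whether (x,y) == (1,1) is reachable. */
int one_one_reachable(void)
{
    static const int perms[6][4] = {
        {0,1,2,3}, {0,2,1,3}, {0,2,3,1},
        {2,0,1,3}, {2,0,3,1}, {2,3,0,1},
    };
    for (int p = 0; p < 6; p++) {
        int x = 0, y = 0;
        for (int i = 0; i < 4; i++) {
            switch (perms[p][i]) {
            case 0: x = 1; break;
            case 1: y = 2; break;
            case 2: y = 1; break;
            case 3: x = 2; break;
            }
        }
        if (x == 1 && y == 1)
            return 1;   /* only reachable with simultaneous writes */
    }
    return 0;
}
```

So 1 1 really does require the writes to land on the same clock edge, as Verity allows.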
05:09:33 <mad> x := 1  happens 
05:10:04 <pikhq> Reordering is fun. 
05:10:11 <ais523> pikhq: it's not even reordering 
05:10:15 <mad> the print() stuff happens on the 2nd thread? 
05:10:15 <ais523> it's just simultaneity 
05:10:23 <ais523> mad: || is a thread split + join 
05:10:24 <mad> after the x:=2 
05:10:41 <mad> where's the join? 
05:10:49 <ais523> i.e. I temporarily fork into two threads, one does {x := 1; y := 2} and the other does {y := 1; x := 2} 
05:10:57 <ais523> || is a fork + join operator 
05:11:15 <mad> I guess you're right, that can't happen in the x86 memory model 
05:11:23 <mad> unless the compiler reorders the writes 
05:11:35 <mad> (which afaik it totally can) 
05:11:38 <ais523> in Verity, the compiler doesn't reorder the writes, it's just that all four happen at the exact same time 
05:11:58 <ais523> mad: right, in gcc you'd need a compiler barrier 
05:12:02 <pikhq> The x86 memory model is one of the stronger ones out there. 
05:12:07 <ais523> like "asm volatile ();" 
05:12:17 <ais523> to prevent gcc reversing the order of the assignments to x and to y 
05:12:23 <mad> pikhq : they probably had no choice :D 
05:12:31 <mad> considering all the apps out there 
05:12:40 <ais523> well most programs out there at the time were single-threaded 
05:12:41 <pikhq> ais523: I'm not sure if that's actually a full compiler barrier. 
05:12:51 <ais523> asm volatile (:::"memory") 
05:12:53 <pikhq> I tend to use asm volatile("" ::: "memory"); 
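The full GCC/Clang compiler barrier pikhq and ais523 converge on, in context. Note this only stops the *compiler* reordering the stores; CPU-side ordering is the job of the atomic/`lock`ed instructions discussed earlier. A minimal sketch:

```c
/* GCC/Clang compiler-only barrier: the "memory" clobber tells the
   compiler that all of memory may be read or written here, so it may
   not move loads or stores across this point.  It emits no machine
   instruction at all. */
#define compiler_barrier() __asm__ __volatile__("" ::: "memory")

int x_, y_;   /* stand-ins for the x and y of the Verity example */

void ordered_writes(void)
{
    x_ = 1;
    compiler_barrier();   /* x_'s store may not sink below this line */
    y_ = 2;
}
```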
05:13:45 <mad> there's probably less compiler memory op reordering on x86 though 
05:13:53 <mad> due to the structure of the instruction set 
05:13:56 <pikhq> mad: It's actually a fairly arbitrary choice, given that it would *only* effect programs and OSes that were aware of multiprocessing, and when introduced this was very close to 0. 
05:15:04 <mad> I remember that when real multiprocessor systems started to happen there were a few apps that started failing 
05:15:12 <mad> not that many tho 
05:15:56 <ais523> hmm, Verity's || operator was called , in Algol 
05:16:02 <pikhq> Yes, they'd be ones that used threads incorrectly. 
05:16:11 <ais523> Verity is an Algol derivative, after all, so it's not surprising it has one 
05:16:28 <mad> is {x := 1; y := 2} implicitly unordered? 
05:16:28 <ais523> however, it's surprising that it isn't seen more often in modern languages 
05:16:32 <pikhq> Hence why it would be not that many -- threading is a bit niche without multiprocessor systems. 
05:16:48 <ais523> assignment to x happens before, or simultaneously with, assignment to y 
05:17:08 <mad> 'or simultaneously with' 
05:17:27 <ais523> a write to a variable cannot happen simultaneously with a write or read that comes earlier 
05:17:41 <ais523> and if a write and read happens simultaneously you get the new value 
05:17:45 <ais523> there, those are Verity's timing rules 
05:17:54 <pikhq> ais523: Huh, that's actually kinda-sorta related to C's , introducing a sequence point, then, isn't it? 
05:17:58 <ais523> (by simultaneously, I mean on the same clock edge) 
05:18:08 <pikhq> Erm, no, no it isn't. 
05:18:28 <ais523> pikhq: for if you want even more detail on how it works: 
05:18:40 <ais523> it's call-by-name so naming a variable can be seen a bit like a function call 
05:18:48 <ais523> and the same call can't return twice on the same cycle 
05:19:07 <ais523> however, for "simple" reads of variables the call can be optimized out 
05:19:35 <ais523> (it just looks at the bits in memory directly) 
05:20:10 <mad> if all read/writes in a group are to different variables, they can happen all at the same time? 
05:20:29 <mad> then I guess they can be reordered no? :D 
05:20:38 <ais523> "the same call can't return twice on the same cycle" is the /only/ rule slowing the program down (apart from some corner cases wrt recursion) 
05:20:44 <ais523> mad: no, in x := 1; y := 2 
05:20:49 <ais523> the write to y can't happen before the write to x 
05:20:57 <ais523> it happens simultaneously (same clock cycle) or later 
05:21:30 <ais523> (in this particular case it would be simultaneous because 2 is a constant, and thus there's nothing that could delay the write to y) 
05:22:49 -!- bender|_ has changed nick to bender|. 
05:22:57 -!- bender| has quit (Changing host). 
05:22:57 -!- bender| has joined. 
05:23:03 <mad> what if you had x := some_calculation; y := 2 
05:23:06 <ais523> fwiw I consider this behaviour to potentially be a bug, but we've decided that for the time being at least it isn't (also it makes the program run faster, which is a good thing in the abstract) 
05:23:21 <ais523> mad: x and y would be assigned at the same time, when the calculation completed 
05:23:39 <ais523> meanwhile x := 2; y := some_calculation would assign x first, start the calculation that cycle, and assign y when the calculation completed 
05:23:44 <ais523> which might or might not be that cycle 
05:23:52 <mad> what about 
05:24:06 <mad> x := some_calculation; y := some_calculation 
05:24:48 <mad> how much of y's calculation can overlap with x's calculation? 
05:24:55 <ais523> runs the calculation, when it finishes delays one cycle; then assigns the result to x and starts running the calculation again, when it finishes assigns the result to y 
05:25:32 <ais523> note the "delays one cycle", this is automatically inserted to fulfil the rule that prevents the same block of code being used for two different purposes at the same time 
05:25:49 <mad> what about 
05:25:56 <mad> x := some_calculation; y := some_other_calculation 
05:26:12 <ais523> those could happen on the same cycle (unless the two calculations involve shared resources) 
05:26:25 <ais523> obviously, they only would if some_other_calculation took zero cycles 
05:26:40 <ais523> as some_other_calculation doesn't start until some_calculation has finished 
05:26:42 <ais523> and to complete the set 
05:26:50 <ais523> x := some_calculation || y := some_other_calculation 
05:27:04 <ais523> would run both calculations in parallel regardless of what arguments they took or how long they took 
05:27:57 <mad> is this designed for some specific piece of hardware? :D 
05:29:19 <ais523> pretty much the opposite: it designs specific pieces of hardware 
05:29:29 <ais523> to run the program you entered 
05:29:37 <ais523> e.g. via programming an FPGA 
05:29:46 <mad> does it compile to verilog or something like that? 
05:30:03 -!- lynn has joined. 
05:30:27 <ais523> and ofc the big advantage of designing hardware is that you can do things in parallel for free 
05:30:36 <ais523> so long as you don't need access to shared resources 
05:31:18 <ais523> one of my coworkers is looking into rewriting "x := a; y := b" as "x := a || y := b" if it can prove that the two programs always do the same thing 
05:31:32 <ais523> which would give a big efficiency gain without requiring people to place all the || in manually 
05:31:51 <mad> that sounds like an aliasing resolution problem 
05:32:10 -!- dingbat has quit (Quit: Connection closed for inactivity). 
05:33:37 <mad> the standard approach to that is renaming but then it can parallelize the variables but not the name changes 
05:33:38 <ais523> well, much of our theoretical research has been in that direction 
05:33:53 <ais523> in particular, we statically know whether any two things can share or not 
05:34:08 <ais523> we don't have aliasing problems because Verity disallows storing anything other than integers in pointers 
05:34:15 <ais523> *integers in variables 
05:34:21 <ais523> (in particular, you can't store a pointer in a variable) 
05:36:38 <mad> how does it know what to put in dram, block ram and in logic fabric registers? 
05:39:20 <ais523> arrays go in block ram, non-array variables in logic fabric (unless a large number of copies are required due to, e.g., them being local to a recursive function) 
05:39:31 -!- lambda-calc has changed nick to lambda-11235. 
05:39:32 <ais523> dram isn't used by the language itself but you could write a library to access it 
05:39:51 <ais523> (assuming you're talking about external ram) 
05:39:59 <ais523> ("d" could expand in more than one way here) 
05:45:10 -!- bender| has quit (Remote host closed the connection). 
05:45:37 <mad> is "array[x] := n || array[y] := m" a compilation error? 
05:46:39 <ais523> yes but only because arrays use () for indexing rather than [] 
05:47:02 <ais523> although, interestingly, "array(x) := n || array(y) := m || array(z) := l" will give you a warning 
05:47:24 <ais523> the reason is that you can't do more than two writes to block RAM simultaneously in hardware 
05:47:39 <mad> yeah obviously 
05:47:45 <ais523> and thus it has to add extra components to serialize the writes so that no more than two happen at a time 
05:48:40 <mad> what mode does it use the bram's port in? read_before_write? 
05:49:03 <ais523> "warning: made 3 copies of an array's read/write ports" "info: at most two read/write ports can be supported efficiently" 
05:49:10 <ais523> and read-before-write, yes 
05:49:27 <ais523> not that it matters, all that changes is the behaviour in race conditions 
05:50:37 <ais523> that said, I'm currently working on implementing pipelining 
05:51:00 <ais523> in which case "array(x) := n || array(y) := m || array(z) := l" would do the writes on three consecutive cycles and thus you wouldn't get the warning 
05:51:23 <mad> but then your throughput would go down :D 
05:53:21 <ais523> yes; this is something we might want to look at later 
05:56:27 <mad> I've been really into trying to find an alternative to RISC/CISC/VLIW for practical CPUs 
05:58:29 <mad> it's hard to balance between too static-scheduled (VLIW being simple but stalling easily etc) and too dynamic-scheduled (RISC/CISC start breaking down majorly over about 4 instructions per cycle) 
05:59:13 <ais523> as this is #esoteric, I'm wondering if there are any other alternatives 
05:59:36 <ais523> even if it's a pretty hppavilion[1] reaction to the problem 
05:59:46 <mad> I have some interesting designs but nothing approaching the simplicity of RISC 
06:00:21 <ais523> what about a CPS processor? 
06:00:34 <ais523> i.e. "run this command, once it finishes running, do this other thing next" 
06:00:45 <ais523> although that's pretty similar to hyperthreading, really 
06:01:04 <mad> it falls down on what exactly a "command" is :D 
06:01:10 <ais523> and there's a reason processors don't run entirely on hyperthreading 
06:01:52 <mad> I thought hyperthreading was basically just a way to keep the cpu active when loads have fallen out of data cache and it's that or stalling 
06:02:29 -!- XorSwap has quit (Quit: Leaving). 
06:02:38 <mad> or, in the case of sparc, a way of wiggling their way out of doing an out-of-order while keeping okay performance :D 
06:03:32 <mad> ais523 : what runs in parallel in a CPS processor? 
06:04:12 <ais523> mad: I guess you can start multiple commands (well, opcodes) running at the same time 
06:04:18 <ais523> basically via the use of a fork opcode 
06:04:42 <ais523> the question is, do we also need a join, or do we just exit and run the code for its side effects? 
06:04:58 <mad> how do you tell if the opcodes are truly independent or have dependencies? 
06:06:01 -!- lynn has quit (Read error: Connection reset by peer). 
06:06:35 <mad> the approach I've been looking at is extremely small "threads" 
06:06:42 <mad> like, 3 instruction long for instance 
06:07:24 <ais523> you don't have to, you just run them whenever they become runnable 
06:07:56 <ais523> I guess that if you add join, this is basically just a case of an explicit dependency graph 
06:08:08 <mad> if your commands do loads/stores on the same memory you need to know what happens 
06:08:13 <ais523> which is a bit different from VLIW 
06:08:20 <ais523> but similar in concept 
06:08:54 <mad> VLIW dependency is handled by keeping everything in some exact known sync 
06:09:53 <mad> compiler scheduler knows the sync and fills the instruction slots 
06:10:25 <mad> generally it works well for DSP code (lots of multiplies and adds etc) but not well at all for load-store-jump code 
06:10:33 <mad> which is why VLIW is typically used in DSPs 
06:11:10 <ais523> well I'm basically thinking of the Verity model but on a CPU 
06:11:33 <mad> some CPUs simply run all loads and stores in-order 
06:11:36 <ais523> if two things don't have dependencies on each other, you run them in parallel 
06:11:44 <mad> everything else can be reordered willy-nilly though 
06:12:20 <ais523> this means that the CPU needs to be able to handle large numbers of threads at once (probably a few hundred in registers, and swapping if the registers get full), and needs very cheap fork/join 
06:12:23 <mad> ais523 : true, but if your two things are memory addresses calculated late in the pipeline, it's very hard to tell that they have dependencies 
06:12:35 <ais523> OTOH, so long as you have enough threads available, you don't care much about memory latency, only bandwidth 
06:12:46 <ais523> just run something else while you're waiting 
06:12:59 <ais523> this is similar to GPUs but GPUs are SIMD at the lowest levels, this is MIMD 
06:13:20 <ais523> mad: well the dependencies would be calculated by the compiler 
06:13:36 <mad> compiler can only calculate so many dependencies 
06:13:39 <ais523> ideally via the use of a language in which aliasing problems can't happen 
06:14:02 <hppavilion[1]> ais523: ALIW and OLIW are some alternatives to RISC, CISC, and VLIW 
06:14:03 <mad> in fact the ideal situation for the compiler is that loads and stores never move 
06:14:14 <mad> every other instruction is easy to move 
06:14:30 <ais523> in most practical languages, though, loads and stores happen a lot 
06:14:42 <ais523> hmm, can we invent some sort of functional memory for functional languages? 
06:14:44 <mad> it's just calculations and it's all in SSA form so it knows exactly what depends on what and how to reorder stuff 
06:14:53 <ais523> i.e. memory never changes once allocated, it can go out of scope though 
06:14:59 <hppavilion[1]> ais523: I thought of that once- the ASM of Haskells 
06:15:08 <mad> what I was thinking of was C++ with absolutely no pointers 
06:15:23 <mad> and every object or array is copy-on-write 
06:15:31 <ais523> there have been some experiments of getting it to run on CPU 
06:16:02 <mad> no dynamic typing or garbage collection or other slow features 
06:16:15 <hppavilion[1]> ais523: What other properties should the FMM have? 
06:16:27 <mad> only copy-on-write because it's the one thing that can prevent aliasing 
06:17:06 <ais523> mad: not the only thing, you can use clone-on-copy instead 
06:17:09 <ais523> it's just slower usually 
06:17:36 <ais523> (it's faster for very small amounts of data, around the scale of "if you have fewer bits in your data than you do in an address") 
06:17:41 <mad> but then don't you need references if you use clone-on-copy 
06:18:30 <mad> references so that you can point to objects that you're going to read from without doing tons of copies 
06:18:40 <ais523> I didn't say it was efficient 
06:19:11 <mad> that's why I'm suggesting copy-on-write 
06:19:26 <ais523> hppavilion[1]: the main problem with a functional memory model is handling deallocation 
06:19:39 <ais523> you can a) use reference counts, b) use a garbage collector, c) clone on copy 
06:19:54 <ais523> method c) is used by most esolang impls AFAIK 
06:20:15 <mad> what do haskell etc use? 
06:21:44 <ais523> mad: normally garbage collectors, for most workloads it's the most efficient known solution 
06:21:57 <ais523> although it requires a lot of complexity to get it more efficient than reference counting 
06:22:41 <mad> can functional programming generate cycles? 
06:22:43 <ais523> I personally like reference counting, especially because it allows you to implement an optimization whereby if something is unaliased at runtime (i.e. the reference count is 1), you can just change it directly rather than having to copy it first 
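(The runtime unaliased-check ais523 describes is, as it happens, exactly what Rust's `Rc::make_mut` implements: mutate in place when the reference count is 1, clone first otherwise. A minimal sketch, not anything from the discussion itself:)

```rust
use std::rc::Rc;

// Push onto a refcounted vector, cloning only when the data is shared.
// Rc::make_mut inspects the reference count: if it is 1 (unaliased at
// runtime) it mutates in place; otherwise it clones first (copy-on-write).
fn push_cow(data: &mut Rc<Vec<i32>>, v: i32) {
    Rc::make_mut(data).push(v);
}
```

(After a shared `Rc` is written through `push_cow`, the writer ends up with its own copy and the other handle keeps the old data, refcount back at 1.)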
06:23:07 <mad> that's what copy-on-write is no? 
06:23:27 <ais523> there are language features which can cause cycles to be generated; however, some functional languages don't include those features 
06:24:02 <ais523> copy-on-write doesn't necessarily check for refcount 1, some implementations check for never-cloned instead 
06:24:25 <ais523> which means that you don't have to update the refcount when something leaves scope 
06:24:41 <mad> but what if it was cloned but then the clone went out of scope? 
06:24:47 <mad> then you have a useless copy 
06:25:07 <ais523> but without a refcount you don't know it's useless until the next gc cycle 
06:25:46 <mad> the idea of having COW on everything is that, when you do need a copy, you typically only need to copy the topmost layer 
06:25:58 <ais523> it's possible that the extra copies are faster than the refcount updating 
06:26:02 <mad> ie an object containing a bunch of sub-objects 
06:26:14 <ais523> most likely because you're just copying a wrapper that contains a couple of pointers 
06:26:24 <mad> if you have to copy the object, you don't need any copy of the sub-objects 
06:26:31 <mad> except the ones that are really different 
06:26:32 <ais523> and yes, I think we're making the same point here 
06:27:33 <mad> how expensive is refcounting anyways? 
06:27:37 <mad> it's just +/- 
06:27:53 <ais523> it's pretty expensive because it screws up your cache 
06:28:12 <ais523> whenever something gets copied or freed, you have to a) dereference it, b) write a word of memory next to it 
06:28:37 <ais523> which means that less fits in your cache, and copy and free operations end up bumping something into cache that isn't immediately needed 
06:28:47 -!- mysanthrop has changed nick to myname. 
06:28:48 <mad> isn't it reading in 1 cache line that's probably going to be read by whatever next object operation on that object? 
06:29:02 <ais523> for a free, you probably aren't planning to use the object again for a while ;-) 
06:30:00 <mad> well, for a free you start by -- refcount, checking it, it's 0, then you have to go through the whole destructor so that's more accesses to object variables no? 
06:31:42 <ais523> oh, you're assuming there's a nontrivial destructor 
06:31:54 <ais523> I'm not, destructor is often trivial 
06:32:23 <mad> well, it must decrease child object refcounts no? 
06:32:31 <ais523> yes, /but/ we're comparing refcounting to GC 
06:32:34 <mad> and eventually call free() 
06:32:41 <ais523> GC doesn't need to decrease the child object refcounts 
06:33:47 <ais523> so it doesn't have a need to pull the object into cache 
06:34:10 <ais523> fwiw, I think there's little doubt that refcounting is better if you have a lot of nontrivial destructors 
06:34:15 <ais523> but that doesn't come up very often 
06:35:37 -!- lambda-11235 has quit (Quit: Bye). 
06:37:00 <mad> it sounds like it depends on the "shape" of the objects you're freeing 
06:37:14 <mad> depending on average size and average number of levels 
06:38:33 <mad> other issue is 
06:38:50 <mad> suppose you have some large global object with some error logger in it 
06:39:27 <mad> some function of some small object within that global object does whatever 
06:39:33 <mad> and then logs an error 
06:40:09 <mad> how do you avoid forcing the user to make the function take the large global object as an explicit argument? :D 
06:41:01 <ais523> this is one of the largest problems in OO, possibly programming generally 
06:41:12 <ais523> there are a lot of proposed solutions but I'm not sure if any of them are actually good ones 
06:41:46 <mad> I know only the C++ solution, which is that you store a pointer to the large global object in the small object 
06:41:53 <mad> but then that breaks any purity 
06:42:18 <ais523> look up dependency injection, it's crazy 
06:42:38 <mad> and it introduces a reference cycle 
06:42:59 <ais523> err, dependency injection frameworks 
06:43:11 <ais523> dependency injection itself is just the concept of passing the large global as an argument 
06:43:19 <ais523> but the interest comes from doing it /implicitly/ 
06:43:36 <ais523> normally via some sort of code transformation, either at compile-time or run-time 
06:43:39 <ais523> (which is why it's crazy) 
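(Dependency injection without a framework is just the explicit-argument style ais523 describes. A sketch of the contrast with mad's C++ back-pointer approach; `Logger` and `Widget` are illustrative names, not from the discussion:)

```rust
// Hypothetical types for illustration only.
struct Logger { lines: Vec<String> }
impl Logger {
    fn log(&mut self, msg: &str) { self.lines.push(msg.to_string()); }
}

struct Widget { id: u32 }
impl Widget {
    // The "C++ solution" would store a pointer back to the global context
    // inside Widget (breaking purity and creating a reference cycle);
    // threading the logger through as an argument keeps Widget free of
    // back-references, at the cost of a wider signature.
    fn frob(&self, log: &mut Logger) {
        log.log(&format!("widget {} frobbed", self.id));
    }
}
```

(DI frameworks exist to make that extra parameter implicit, via compile-time or run-time code transformation, which is the part ais523 calls crazy.)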
06:44:20 -!- nortti_ has changed nick to nortti. 
06:47:04 <mad> without solving aliasing then basically you're designing a cpu for executing C++ 
06:47:54 <mad> and I don't think it's possible to design a cpu for higher level languages 
06:48:31 <mad> because C++ tends to have all the real low latency operations basically 
06:48:58 <mad> and in particular the ones that have few sideeffects 
06:49:04 <mad> side effects are deadly 
06:50:26 <ais523> well I don't think a language can be considered higher-level nowadays if it doesn't provide at least some way to manage side effects 
06:51:03 <mad> dunno, aside from functional languages 
06:51:23 <mad> my impression is that most high level languages have great tools for CAUSING side effects 
06:52:02 <mad> witness all the perl-python-lua-js type of languages that never even got multithreading 
06:55:11 <mad> I can't think of any approach other than multithreading and functional-style-purity for managing side effects 
06:55:32 <mad> especially long-term side effects 
06:56:25 <mad> for short term side effects generally you have the whole LLVM style thing where it uses SSA on non-memory values and then LLVM-style alias resolution on loads/stores 
06:56:33 <mad> and...that's it! 
06:57:27 <mad> unless you count SIMD as a form of side-effect management 
06:57:32 <mad> (which I guess it is!) 
06:58:04 -!- dingbat has joined. 
07:01:10 <mad> that's why the MIPS is still the "top" design in a way 
07:01:30 -!- Sprocklem has joined. 
07:04:32 <ais523> mad: well Verity compiles via an intermediate language SCI, which has the property that aliasing will fail to compile 
07:04:51 <ais523> although it sacrifices quite a lot to accomplish that 
07:05:54 <mad> well, it compiles to vhdl so it's essentially a low level language no? 
07:05:55 -!- carado has joined. 
07:06:50 <ais523> mad: Verity is low level, yes 
07:07:04 <ais523> however the principles behind SCI were originally expressed in a language which was (at the time, at least) pretty high level 
07:10:40 <mad> if you're going towards aggressive threading then the target kind of cpu is pretty clear 
07:10:50 <mad> stick in a bunch of in-order RISCs 
07:10:58 <mad> as many as you can fit 
07:11:32 <mad> each new core = new DCACHE = 1 more potential load per cycle 
07:11:50 <mad> or 2 loads if you have a 2 port DCACHE 
07:12:38 <ais523> I think you also need to have more threads "ready to go" than you do CPUs 
07:12:53 <ais523> so that you can suspend some while waiting for memory access, branch prediction failure, etc. 
07:12:59 <mad> you'll probably want some degree of hyperthreading to fill in stalls 
07:13:09 <ais523> actually if you have enough hyperthreads you needn't even bother to predict branches 
07:13:19 <ais523> just run something meanwhile while working out whether to take them or not 
07:14:28 <mad> I think the branch predictor is worth the trouble 
07:14:41 <mad> it's not that complex at low IPC 
07:15:05 <mad> also at low IPC your pipeline is likely to be short 
07:16:00 <mad> this is basically the ultraSPARC 
07:16:28 <mad> oriented towards load-store-jump code that has lots of threads 
07:16:31 <mad> ie servers 
07:17:26 <ais523> you could totally write a compiler to use lots of threads if they were that lightweight 
07:17:34 <ais523> and they'd be very load-store-jump-mimd heavy 
07:18:15 <mad> you'd need some sort of threading that doesn't have to go through the OS's scheduler 
07:19:08 <mad> and get people to use tons of small threads in their code 
07:19:35 <ais523> the latter is something that'll be increasingly necessary to increase performance as time goes on 
07:19:59 <ais523> and hardware thread scheduling is a natural extension of that 
07:20:20 <mad> the problem is that generally if the OS's scheduler is involved, that probably already wipes out your potential benefits in lots of cases 
07:20:42 <b_jonas> ais523: have you looked at Rust? I don't remember if it came up yet and whether I've told my first impression opinions. 
07:20:47 <mad> also there's a limit to how much threading you can get going 
07:21:08 <ais523> b_jonas: yes, this channel used to have a lot of rust discussion 
07:21:11 <mad> every cpu you add to a system makes the synchronization system between core memories harder 
07:21:24 <ais523> that said, I don't think I know your opinion on Rust, either because you haven't told me or because I've forgotten 
07:22:08 <mad> that's starting to sound like the PS3's CELL :D 
07:23:11 <ais523> it was ahead of its time 
07:23:51 <ais523> NUMA is going to get more and more popular as time goes on, basically because there just isn't really any other option if we want computers to keep getting faster in terms of ability-to-execute-programs 
07:24:11 <mad> there's always aggressive SIMD 
07:24:48 <mad> which gives you nothing for load-store-jump programs 
07:25:07 <mad> but I don't think anything's going to help load-store-jump programs by this point 
07:25:55 <b_jonas> mad: simd and numa have different roles. they both help, and I'm very interested in simd, but at some point even if you write optimal simd programs to reduce memory and cache load, you'll run out of memory bandwidth, and numa is the only technically realistic way to increase it 
07:26:02 <ais523> the problem with SIMD is that although it's good for some workloads, those are typically the workloads you'd run on a GPU 
07:26:15 <b_jonas> ais523: that's not quite true 
07:26:18 <ais523> so it's more of a stopgap until people get better at writing multithreaded programs 
07:26:31 <mad> CELL worked because video games have some mathy calculations to offload 
07:26:57 <b_jonas> ais523: it's that people are buying into the GPU hype and very few people are trying to learn to actually use SIMD and cpu programming in a good way 
07:27:16 <b_jonas> (this is partly why I'm very interested in it) 
07:27:23 <mad> you can put hundreds of cores on a CPU if they can't access any memory :D 
07:27:33 <b_jonas> ais523: yes, there's some overlap, but still, I don't think GPUs will solve everything 
07:27:59 <mad> gpus solve one problem, rendering video games 
07:28:30 <mad> other problems might see a speed gain only as much as they look like video game rendering :D 
07:28:34 <ais523> GPUs actually have similar levels of SIMDiness to CPUs; their strength is that they can run the same code on thousands of threads, but not necessarily with the same control flow patterns 
07:29:12 <mad> as far as I can tell the GPU's advantage is that basically memory writes only happen to the frame buffer 
07:29:18 <ais523> they're bad at pointer-heavy stuff, and in general, at things with unpredictable memory access patterns 
07:29:24 <mad> so GPUs have essentially no aliasing to solve 
07:29:55 <ais523> mad: they have block-local storage, which is basically a case of manually-controlled caching 
07:30:01 <ais523> where you load and flush the cache lines manually 
07:30:15 <mad> once aliasing comes into the picture (or heavy feedback loops) CPUs take the upper hand afaik 
07:30:46 <b_jonas> I might be dismissing gpu stuff too much due to how overhyped it is 
07:31:08 <ais523> mad: it's mostly just that GPUs are bad at pointers 
07:31:27 <mad> it comes down to how few GPU-able problems there are I think 
07:31:27 <ais523> aliasing isn't any harder than dereferencing nonaliased memory, they're both hard 
07:32:19 <mad> aliasing forces your memory operations to be in-order basically 
07:32:36 <mad> and adds lots of heavy checks the more you reorder your operations 
07:33:08 <mad> eventually you end up with giant content-addressable-alias-resolution buffers and whatnot 
07:33:31 <mad> and everything becomes speculative 
07:33:51 -!- mroman has joined. 
07:34:17 <ais523> well how useful is unpredictable aliasing from a program's point of view? 
07:34:25 <lifthrasiir> b_jonas: SIMD is a good fit for "occasional", "one-off" computations. GPGPU is a good fit for "pervasive" large computations. people seem to easily confuse the differences. 
07:34:57 <ais523> lifthrasiir: hmm: what would you say is the best way to zero a large amount of RAM? 
07:34:59 <mad> ais523 : it's mandatory to guarantee correctness 
07:35:06 <lifthrasiir> (and when one needs occasional large computations, one is advised to avoid them) 
07:35:08 <ais523> mad: not from the compiler's point of view 
07:35:22 <ais523> how often do you write a program that benefits from aliasing, and can't predict where it happens in advance? 
07:35:42 <ais523> lifthrasiir: that didn't seem that stupid to me 
07:35:58 <ais523> I was actually thinking that systems might benefit from a dedicated hardware memory zeroer 
07:36:10 <ais523> Windows apparently zeroes unused memory in its idle thread 
07:36:18 <lifthrasiir> ais523: but I think it is not a good way to approach the problem. why do you need a large amount of zeroed memory after all? 
07:36:29 <ais523> as something to do (thus it has a supply of zeroed memory to hand out to programs that need it) 
07:36:59 <lifthrasiir> then I guess SIMD or some other OS-sanctioned approach is necessary 
07:37:05 <ais523> lifthrasiir: basically a) because many programs ask for zeroed memory; b) you can't give programs memory that came from another program without overwriting it all for security reasons, so you may as well overwrite with zeros 
07:37:11 <mad> well, if you write to a variable, eventually you're going to want to read from it 
07:37:20 <mad> fundamentally that's aliasing 
07:37:26 <ais523> GPGPU could zero GPU memory quickly just fine; the problem is that it uses different memory from the CPU 
07:37:30 <ais523> and the copy between them would be slow 
07:38:18 <lifthrasiir> DMA is a joke, but the hardware-wired way to zero memory may be somehow possible even in the current computers 
07:38:23 <ais523> mad: yes but often both pointers are literals (because you use the same variable name both times), so the aliasing is predictable 
07:38:31 <mad> for instance, a delay buffer for an echo effect 
07:38:44 <mad> how fast it aliases depends on the delay time you've set 
07:39:11 <ais523> yes, that's a good example of a "memmove alias" 
07:39:19 <mad> ais523 : aliasing isn't predictable if you use very large array indexes :D 
07:39:43 <ais523> I'm kind-of wondering, if restrict was the default in C, how often would you have to write *unrestrict to get a typical program to work 
07:39:49 <ais523> mad: larger than the array, you mean? :D 
07:40:34 <mad> yeah but the cpu doesn't know the array size 
07:40:43 <mad> most of the time even the compiler doesn't know 
07:41:01 -!- tromp has quit (Remote host closed the connection). 
07:41:12 <ais523> mad: well that at least is clearly something that can be fixed by higher-level languages 
07:41:20 <mad> there's also the case of, well, you're accessing a class that has pointers in it 
07:41:38 <mad> and it's hard to tell when your code will read out one of those pointers and write to that data 
07:42:14 <ais523> you do know what restrict means, right? 
07:42:21 -!- AnotherTest has joined. 
07:42:34 <ais523> "data accessible via this pointer parameter will not be accessed without mentioning the parameter in question" 
07:42:35 <mad> ais523 : higher-level languages can abuse references to cause surprise aliasing 
07:43:05 <mad> I wasn't aware of the exact semantics of restrict 
07:43:07 <ais523> example? mostly because it'll help me understand what you're considering to be higher-level 
07:44:15 <mad> consider a java function working on some array 
07:44:23 <b_jonas> “<ais523> [GPUS] they're bad at pointer-heavy stuff, and in general, at things with unpredictable memory access patterns” – are they also bad at unpredictable local sequential access of memory, such as decoding a jpeg-like huffmanized image that's encoded as 256 separate streams, you have an offset table for where the huffman input of each stream and the output of each stream starts,  
07:44:38 <b_jonas> and within one stream, you can read the huffman input and the output pixels roughly sequentially? 
07:44:41 <mad> then it reads some member variable in one of the objects it has as an argument 
07:45:03 <mad> the member variable is a reference to the same array the java function is working on 
07:45:09 <mad> and it uses it to poke a value 
07:45:38 <b_jonas> “<ais523> I'm kind-of wondering, if restrict was the default in C, how often would you have to write *unrestrict to get a typical program to work” – isn't that sort of what Rust is about? 
07:45:42 <ais523> b_jonas: so long as what you're indexing is either a) stored in memory that's fast to read but very slow to write, or b) fits into block memory (basically a manually-controlled cache), you can dereference pointers 
07:46:02 <b_jonas> and I don't think that's how restrict in C works 
07:46:02 <ais523> b_jonas: it's similar, yes 
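(For comparison: Rust's `&mut` references are non-aliasing by construction, roughly the guarantee that C's `restrict` asks the programmer to promise; ais523's "restrict by default, write *unrestrict to opt out" is close to how the borrow checker behaves. A minimal sketch:)

```rust
// Because dst is &mut and src is &, the compiler may assume they never
// overlap -- the same freedom a C compiler gets from restrict-qualified
// parameters.
fn add_into(dst: &mut [i32], src: &[i32]) {
    for (d, s) in dst.iter_mut().zip(src) {
        *d += *s;
    }
}

// The "surprise aliasing" call is rejected at compile time:
//     let mut v = vec![1, 2];
//     add_into(&mut v, &v);
//     // error[E0502]: cannot borrow `v` as immutable because it is
//     // also borrowed as mutable
```

(The opt-out, where genuine aliasing is wanted, is interior mutability or raw pointers rather than an `*unrestrict` keyword.)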
07:46:25 -!- AnotherTest has quit (Ping timeout: 240 seconds). 
07:46:38 <ais523> mad: that's nothing to do with Java being high-level, IMO 
07:46:55 <mad> this example applies to most non-pure languages 
07:47:03 <ais523> storing a reference to something inside the thing itself is a pretty low-level operation 
07:47:05 <mad> like perl and python and whatnot 
07:47:23 <mad> well, your function gets some array argument 
07:47:24 <ais523> actually, if you do that in Perl, you're supposed to explicitly flag the reference so as to not confuse the garbage collector 
07:47:30 <mad> and some object 
07:47:43 <mad> and the object has a reference to the array but you don't know 
07:48:25 <b_jonas> ais523: well, if there are 256 streams, and you're decoding only one channel at a time and assembling the three channels later in a second pass, then each stream should be at most 8192 bytes long, its output also 8192 bytes long, plus there's a common huffman table and a bit of control information. 
07:48:36 <mad> there's no self reference in my example 
07:49:06 <ais523> mad: well, say, in SCI (which is designed to avoid aliasing), if you give a function two arguments, any object can only be mentioned in one of the arguments 
07:49:08 <b_jonas> Oh, and some local state for each 8x8 block that might take say 512 bytes. 
07:49:15 <mad> b_jonas : isn't huffman decoding inherently sequential? 
07:49:26 <b_jonas> (I'm assuming a 2048x1024 pixel image, 8 bit depth channels.) 
07:49:51 <b_jonas> mad: yes, but if you use a shared huffman table and you mark where each stream starts in the input and output, then you can decode each stream separately 
07:50:20 <b_jonas> mad: that is actually practical for image decoding, and also for image encoding or video de/encoding, but those get MUCH hairier and more complicated 
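(The structure b_jonas describes, an offset table making each stream independently decodable, can be sketched as follows. The per-stream decoder here is a toy run-length scheme standing in for the shared-table Huffman decode, since the point is the parallel split, not the entropy coding; decoding stays inherently sequential *within* each stream:)

```rust
use std::thread;

// Toy stand-in for a per-stream entropy decoder: each stream is a list of
// (count, byte) pairs. Like Huffman decoding, it is sequential within a
// stream -- each symbol's position depends on the previous ones.
fn decode_stream(input: &[u8]) -> Vec<u8> {
    input
        .chunks(2)
        .flat_map(|p| std::iter::repeat(p[1]).take(p[0] as usize))
        .collect()
}

// The offset table (start, end per stream) is what allows one thread per
// stream, exactly as with the 256 independently marked Huffman streams.
fn decode_parallel(data: &[u8], offsets: &[(usize, usize)]) -> Vec<Vec<u8>> {
    thread::scope(|s| {
        let handles: Vec<_> = offsets
            .iter()
            .map(|&(start, end)| s.spawn(move || decode_stream(&data[start..end])))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}
```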
07:50:22 <mad> ais523 : if it avoids aliasing then it's in a different category 
07:50:40 <ais523> mad: I'm saying that putting limits on aliasing is higher-level than not putting limits on aliasing 
07:50:46 <b_jonas> mad: note that this is pure huffman encoding, like jpeg, not deflate-like copy operations from a 16k buffer of previous output. 
07:50:48 <ais523> because it means that you have more information about the data you're moving around 
07:51:07 <b_jonas> mad: the copy operations are why PNG/zip decompression is really impossible to parallelize or implement fast these days 
07:51:39 <b_jonas> gzip/zip/PNG made lots of sense when they were invented, but less sense for today's hardware 
07:52:03 <ais523> b_jonas: deflate uses references to locations earlier in the output, right? how much would it change if it used references to locations as they were in the input file? 
07:52:03 <b_jonas> but JPEG is just as old and ages much better, which is why most modern video formats are similar to it, even if different in lots of specifics 
07:52:17 <ais523> in terms of compression ratio 
07:52:17 <mad> b_jonas : I guess it works if you have multiple huffman segments that you know the start of 
07:52:44 <b_jonas> ais523: I'm not sure, I don't really know about modern compression algorithms, and it probably depends on what kind of data you have.  
07:52:50 <ais523> that seems to be GPU-acceleratable, although I haven't worked out the details yet 
07:52:50 <lifthrasiir> mad: actually I managed to persuade my friend to write a similar thing with the existing deflate stream 
07:53:32 <mad> doesn't every huffman symbol basically depend on the previous one? 
07:53:42 <b_jonas> ais523: encoding a video also references previous frames, but in a way than I think is much nicer than gzip, because you only reference one or two previous frames, so you can decode per frame. it might still get ugly. 
07:53:45 <mad> or specifically the length of the previous one 
07:54:06 <lifthrasiir> mad: the point is that DEFLATE uses the end code that is distinctive enough that it can be scanned much quicker 
07:54:16 <lifthrasiir> then the friend got stuck on the LZ77 window :p 
07:55:05 -!- andrew_ has quit (Remote host closed the connection). 
07:55:07 <mroman> has anyone ever done some graph related database stuff? 
07:55:09 <lifthrasiir> (it was a term project AFAIK, and the friend did get A even though the prototype was only marginally faster) 
07:55:19 <b_jonas> Maybe I should write a toy image format and encoder and decoder, just to learn about how this stuff works, even if I don't get anything practically usable. 
07:55:24 <lifthrasiir> (since everyone else was doing JPEG decoder stuff) 
07:55:33 <ais523> mroman: I looked into it a bit for aimake 4 
07:55:39 <ais523> but didn't reach the point where it came to actually write the code 
07:55:42 <b_jonas> (There are already lots of practical image coders out there.) 
07:55:43 <ais523> so so far, all I have is plans 
07:56:20 <mroman> let's assume I have paths in my database A -> B -> D and A -> C -> D 
07:56:26 <mad> ais523 : I think "non aliasing" for higher language tends to be a synonym for "pure/no side effects" and often "functional" or maybe even "lazy-evaluated functional" 
07:56:52 <ais523> mad: err, the Haskell-alikes have tons and tons of aliasing 
07:56:52 <mroman> and I want to know for example if there's a traffic jam on A -> D 
07:56:58 <ais523> they're just constructed so that it never matters 
07:57:08 <mad> it doesn't HAVE to be this way but afaik all the "no side effects" languages are functional 
07:57:10 <lifthrasiir> mad: to be more exact: DEFLATE stream stores the (encoded) tree in the front, and the tree is structured so that every prefix code is ordered by the length of code and then by the lexicographical order. since the end code is least frequent it should appear at the very end, i.e. all 1s. 
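(The canonical prefix-code assignment lifthrasiir describes — codes ordered by length, then lexicographically, so the longest/least-frequent symbol like DEFLATE's end-of-block code lands on the all-ones pattern — is the standard construction; a sketch under that description:)

```rust
// Given per-symbol code lengths (0 = unused), assign canonical codes:
// symbols sorted by (length, symbol index); the first code of each length
// is the previous code + 1, shifted left by the length difference.
fn canonical_codes(lengths: &[u8]) -> Vec<(u8, u32)> {
    let mut order: Vec<usize> = (0..lengths.len()).filter(|&i| lengths[i] > 0).collect();
    order.sort_by_key(|&i| (lengths[i], i));

    let mut codes = vec![(0u8, 0u32); lengths.len()];
    let (mut code, mut prev_len) = (0u32, 0u8);
    for &i in &order {
        code <<= lengths[i] - prev_len; // widen when the length increases
        codes[i] = (lengths[i], code);
        code += 1;
        prev_len = lengths[i];
    }
    codes
}
```

(With lengths [1, 2, 3, 3] the codes come out 0, 10, 110, 111: the last, longest code is all ones, which is what makes the end code cheap to scan for.)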
07:57:26 <mad> ais523 : afaik haskell has no real aliasing? 
07:57:43 <ais523> > let x = 4 in let y = x 
07:57:44 <lambdabot>  <hint>:1:14: parse error in let binding: missing required 'in' 
07:57:51 <ais523> > let x = 4 in let y = x in y 
07:58:05 <ais523> actually GHC probably optimized the aliasing there out 
07:58:13 <lifthrasiir> mad: the typical stream has 10--14 one bits for the end code, so the decompressor may try to speculatively decode the stream from that point 
07:58:24 <ais523> but x and y would be aliases in a naive Haskell implementation 
07:58:31 <ais523> there's just no way to tell from within Haskell itself 
07:58:34 <lifthrasiir> (and the project was for CELL processor, quite amenable for this kind of things) 
07:58:56 <ais523> because if two things alias, the normal way you tell is either to use a language primitive that tells you that, or to modify one and see if the other changes 
07:59:17 <mad> ais523 : yes but they're basically not real aliases because you can't write in one and get surprise changes in the other 
07:59:20 <mroman> the traffic jam could be between A -> B, B -> D, A -> C, C -> D or A -> D itself 
08:00:00 <mad> multiple readonly pointers to the same block of memory isn't a problem 
08:00:00 <ais523> mroman: huh, that's an interesting operation 
08:00:12 <ais523> mad: keep going and you'll invent Rust ;-) 
08:00:24 <mroman> other questions are: Are there paths from A to D that are not equally fast. 
08:00:26 <mad> the problem is when one of these pointers writes something 
08:00:42 <mad> and it's impossible to say which other pointers will see the write 
08:01:10 <mad> at local level it's usually possible to figure it out (LLVM's alias solving does this) 
08:01:18 <mad> at global level it becomes impossible 
08:01:23 <ais523> mroman: the SQLite docs have an example of doing transitive closure via a recursive query 
08:01:47 <ais523> I'm not sure if the performance is better or worse than running Dijkstra's algorithm from outside with a series of queries 
08:01:56 <mad> that's one of x86's "voodoo" advantages 
08:02:05 <b_jonas> ais523: I have to afk for some hour now, but I can tell my preliminary opinion on rust later. 
08:02:08 <mad> it doesn't require memory reordering to perform well 
08:02:14 <ais523> (the constant factor should be better, but the asymptotic performance might be worse if it's using a bad algorithm) 
08:02:52 <mad> if it was possible to do more efficient memory reordering then x86 would be gone by now 
08:03:41 <mad> some RISC or VLIW would have been twice as fast as x86 and everybody would be switching 
08:05:41 <mad> as it is, the best cpu design practice, as far as I can tell, is to assume that loads/stores aren't going to move, and rearrange basically everything else around them 
08:07:56 <mad> result: out-of-order execution 
08:10:04 <mad> itanium tried to do compile time rearranging with some complex run-time checking+fallback mechanism 
08:10:06 <mad> and it failed 
08:15:57 -!- Elronnd has quit (Quit: Let's jump!). 
08:21:21 -!- Elronnd has joined. 
08:41:33 -!- tromp has joined. 
08:46:18 -!- tromp has quit (Ping timeout: 276 seconds). 
08:54:12 -!- hppavilion[1] has quit (Ping timeout: 244 seconds). 
09:00:14 -!- bender| has joined. 
09:04:30 -!- olsner has quit (Ping timeout: 276 seconds). 
09:09:20 -!- ais523 has quit. 
09:21:06 -!- AnotherTest has joined. 
09:25:57 -!- AnotherTest has quit (Ping timeout: 268 seconds). 
09:29:33 -!- J_Arcane has quit (Ping timeout: 240 seconds). 
09:30:48 -!- olsner has joined. 
09:36:34 -!- olsner has quit (Ping timeout: 240 seconds). 
09:38:37 <HackEgo> [wiki] [[Talk:Brainfuck]]  https://esolangs.org/w/index.php?diff=46491&oldid=46410 * Rdebath * (+4885) Shortest known "hello world" program. -- Define "shortest"! 
09:45:55 -!- andrew_ has joined. 
09:59:25 -!- andrew_ has quit (Remote host closed the connection). 
10:13:17 -!- nisstyre_ has changed nick to nisstyre. 
10:13:27 -!- nisstyre has quit (Changing host). 
10:13:27 -!- nisstyre has joined. 
10:16:26 -!- AnotherTest has joined. 
10:19:11 -!- int-e_ has changed nick to int-e. 
10:25:59 -!- AnotherTest has quit (Ping timeout: 260 seconds). 
10:35:23 -!- olsner has joined. 
10:42:11 -!- tromp has joined. 
10:45:42 -!- jaboja has joined. 
10:46:18 -!- tromp has quit (Ping timeout: 244 seconds). 
11:37:27 -!- boily has joined. 
11:42:25 -!- jaboja has quit (Ping timeout: 240 seconds). 
12:16:30 <boily> FUNGOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOT! 
12:17:04 <HackEgo> fungot is our beloved channel mascot and voice of reason. 
12:18:56 <boily> FireFly: MASCOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOT! 
12:19:10 <boily> oops, wrong autocompletion. 
12:19:34 <boily> fizzie: MASCOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOT! FUNGOOOOOOOOOOOOOOOOOOOOOOOOT! !?!???!?!?!!???!!!!!! 
12:23:22 -!- boily has quit (Quit: NONPLUSSING CHICKEN). 
12:51:19 -!- jaboja has joined. 
12:53:43 -!- fungot has joined. 
13:02:49 <Taneb> fungot, how are you doing 
13:02:49 <fungot> Taneb: i'm sure it appeared on l:tu or winxp? ;p 
13:09:29 -!- oerjan has joined. 
13:36:29 -!- spiette has joined. 
13:48:08 -!- AnotherTest has joined. 
13:56:01 -!- jaboja has quit (Ping timeout: 240 seconds). 
14:28:25 -!- Alcest has joined. 
14:30:40 -!- zadock has joined. 
14:42:14 <oerjan> @tell mad <mad> can functional programming generate cycles? <-- in haskell it can, e.g. lst = 1 : lst defines a cyclic list, which is nevertheless immutable. (Technically you can in ocaml too, but only for simple constant initializers.) 
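(A rough Rust analogue of oerjan's `lst = 1 : lst`, sketched here for comparison: an immutable cons cell whose tail points back to itself. `Rc::new_cyclic` hands the closure a `Weak` pointer to the cell being constructed, so the cycle exists from birth without a strong count trapping it — a plain `Rc` cycle would leak, which is the refcounting-vs-GC trade-off from the earlier discussion:)

```rust
use std::rc::{Rc, Weak};

// An immutable, self-referential cons cell: head 1, tail pointing back at
// the cell itself. The tail is Weak so the refcount can still reach zero.
struct Cell {
    head: i32,
    tail: Weak<Cell>,
}

fn cyclic_ones() -> Rc<Cell> {
    Rc::new_cyclic(|weak| Cell { head: 1, tail: weak.clone() })
}
```

(Following the tail any number of times yields the same cell, i.e. an infinite list of 1s, without any mutation after construction.)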
14:52:33 -!- `^_^v has joined. 
14:56:24 -!- lambda-11235 has joined. 
15:24:49 -!- UrbanM has joined. 
15:28:03 <UrbanM> hi please check out my website . http://sh.st/RptZh... ty :) i promise its not a virus 
15:28:45 -!- tromp has joined. 
15:30:45 <UrbanM> hi please check out my website . http://sh.st/RptZh... ty :) i promise its not a virus 
15:32:13 -!- ChanServ has set channel mode: +o oerjan. 
15:32:34 -!- oerjan has set channel mode: +b *!*Master@*.38.31.175.cable.t-1.si. 
15:32:34 -!- oerjan has kicked UrbanM You are not _our_ Urban M. 
15:33:03 -!- tromp has quit (Ping timeout: 244 seconds). 
15:39:29 <int-e> oerjan: of course the immutability of Haskell is a lie. 
15:39:54 <int-e> (I'm alluding to thunk updates.) 
15:41:21 <int-e> brainfuck guy... yes 
15:41:27 <int-e> https://esolangs.org/wiki/Urban_M%C3%BCller 
15:41:59 <int-e> (ah, there was a question mark before the ellipsis. I typed that, then googled to confirm.) 
15:42:55 <int-e> however... the user above looked more like an imposter 
15:44:04 <int-e> sh.st... "shorten urls and earn money"... sounds legitimate 
15:49:51 <int-e> so what do we get... google analytics, tons of ads, some trackers, and did they actually put a captcha before the embedded link? 
15:50:06 <int-e> (I'm looking at page source code) 
15:51:06 <int-e> and there's a ton of javascript I haven't looked at. 
15:52:57 -!- XorSwap has joined. 
15:55:41 -!- lambda-11235 has quit (Quit: Bye). 
15:57:42 <oerjan> int-e: thus i also mentioned ocaml hth 
15:57:47 -!- oerjan has set channel mode: -o oerjan. 
16:00:26 <oerjan> btw does ghc allocate a thunk for a simple lst = 1 : lst; lst :: [Int] 
16:06:41 -!- bender| has quit (Ping timeout: 250 seconds). 
16:06:51 <izabera> jobs outside of italy are so hard to grasp 
16:08:16 -!- augur has joined. 
16:09:31 -!- mroman has quit (Quit: Lost terminal). 
16:12:06 -!- oerjan has quit (Quit: Later). 
16:24:24 -!- augur has quit (Remote host closed the connection). 
16:24:58 -!- augur has joined. 
16:29:38 -!- augur has quit (Ping timeout: 250 seconds). 
16:40:29 <int-e> @tell oerjan btw does ghc allocate a thunk for a simple lst = 1 : lst <-- wow, apparently not (checked assembly output from ghc-7.10.2 with -O2, native code gen) 
16:43:06 <int-e> @tell oerjan even ghc-7.6.3 didn't allocate a thunk, that's as far back as I can easily go 
16:50:47 -!- zzo38 has joined. 
16:55:38 -!- Treio has joined. 
17:04:31 -!- jaboja has joined. 
17:15:35 -!- Treio has quit (Quit: Leaving). 
17:17:03 -!- XorSwap has quit (Ping timeout: 240 seconds). 
17:44:11 -!- XorSwap has joined. 
17:54:06 -!- augur has joined. 
18:06:44 -!- augur has quit (Remote host closed the connection). 
18:09:19 -!- lambda-11235 has joined. 
18:14:01 -!- MoALTz has joined. 
18:33:44 <izabera> https://github.com/bloomberg/bucklescript 
18:38:19 -!- lleu has joined. 
18:39:38 -!- augur has joined. 
18:46:04 -!- heroux has quit (Ping timeout: 264 seconds). 
18:46:47 -!- XorSwap has quit (Ping timeout: 244 seconds). 
18:49:59 -!- augur has quit (Read error: Connection reset by peer). 
19:08:07 -!- zadock has quit (Quit: Leaving). 
19:11:01 -!- lynn has joined. 
19:14:12 -!- heroux has joined. 
19:21:10 -!- XorSwap has joined. 
19:31:22 -!- hppavilion[1] has joined. 
19:40:45 <shachaf> Did you work out those categories? 
19:43:54 <hppavilion[1]> shachaf: I'm currently trying to figure out the type of the arrows in example (A) 
19:44:12 <hppavilion[1]> ("Type" may not be the correct word, but it gets the point across if I send this message) 
19:44:24 <shachaf> The type of an arrow from A to B is A -> B 
19:44:46 <hppavilion[1]> shachaf: Yeah, I mean I'm trying to figure out what they represent 
19:45:12 <hppavilion[1]> shachaf: I think the only thing I've figured out is that in (A), composition represents the transitive property of ≤ 
19:45:40 <shachaf> What does identity represent? 
19:45:59 <hppavilion[1]> shachaf: The fact that a value is less than or equal to itself 
19:50:15 -!- lambda-11235 has quit (Ping timeout: 264 seconds). 
19:51:17 <hppavilion[1]> shachaf: Wait, do arrows just represent arbitrary relations? 
19:51:22 <shachaf> An arrow doesn't have to represent anything. 
19:52:05 -!- lambda-11235 has joined. 
19:52:06 <shachaf> Sometimes an arrow is just a cigar. 
19:52:32 <int-e> hppavilion[1]: you can interpret any relation on a set as a directed graph with that set as nodes (allowing loops, not allowing multiple edges) 
19:52:56 <shachaf> Arrows don't have to represent functions, no. 
19:52:58 <hppavilion[1]> I don't smoke, so if it is a type of cigar I wouldn't get the joke 
19:53:02 <shachaf> Or transformations, whatever that is. 
19:53:19 <int-e> but you really need reflexivity and transitivity to make a category that way 
19:53:21 <hppavilion[1]> shachaf: Do arrows have to mean something, or can they just be arrows? 
19:53:35 <int-e> they can be just arrows 
19:54:29 <int-e> I don't know what example (A) refers to. 
19:54:57 <int-e> well, arguably the underlying relation gives the arrow *some* meaning 
19:55:12 <int-e> it's really a philosophical question at this point. 
19:55:36 <hppavilion[1]> int-e: But do they not represent anything in the way Set has arrows representing functions? 
19:56:38 <hppavilion[1]> int-e: Or could it be argued that they represent Void? xd 
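The preorder-as-category idea from the discussion can be sketched concretely (all names here are mine, invented for illustration). An arrow from a to b is just a witness that a <= b: it carries no function, which is the sense in which arrows "can be just arrows", while identity is reflexivity and composition is transitivity, exactly as hppavilion[1] said:

```haskell
-- A witness that the first Int is <= the second. This is the whole
-- content of an arrow in the preorder category (Int, <=).
data Leq = Leq Int Int deriving (Eq, Show)

-- Smart constructor: an arrow exists only when the relation holds.
leq :: Int -> Int -> Maybe Leq
leq a b | a <= b    = Just (Leq a b)
        | otherwise = Nothing

-- Identity arrow on a: reflexivity, a <= a.
identityLeq :: Int -> Leq
identityLeq a = Leq a a

-- Composition: transitivity. From a <= b and b <= c we get a <= c;
-- composition is only defined when the middle endpoints match.
composeLeq :: Leq -> Leq -> Maybe Leq
composeLeq (Leq a b) (Leq b' c)
  | b == b'   = Just (Leq a c)
  | otherwise = Nothing
```

This also illustrates int-e's point that reflexivity and transitivity are exactly what you need to turn a relation-as-directed-graph into a category.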
20:01:01 -!- Phantom_Hoover has joined. 
20:17:43 -!- lambda-11235 has quit (Quit: Bye). 
20:19:04 -!- XorSwap has quit (Ping timeout: 252 seconds). 
20:42:54 -!- p34k has joined. 
20:46:09 <zzo38> To allow another program to change resources of a window in the X window system, you could have the other program append a null-terminated string to a property on that window; that client then watches that property, reads and deletes it, and adds the string into the resource manager. You can also send commands that aren't resources in the same way, by adding a prefix to specify 
20:47:31 <zzo38> Add RESOURCE_MANAGER into the WM_PROTOCOLS list to specify that this function is available, I suppose. 
20:48:03 -!- spiette has quit (Ping timeout: 240 seconds). 
20:48:17 <zzo38> Does it make sense to you? 
20:52:17 <zzo38> The format of the property must be 8, the type must be STRING, and the mode must be PropModeAppend. 
21:03:01 -!- spiette has joined. 
21:05:16 -!- `^_^v has quit (Quit: This computer has gone to sleep). 
21:17:12 -!- augur has joined. 
21:24:33 -!- augur has quit (Ping timeout: 240 seconds). 
21:30:16 -!- ais523 has joined. 
21:33:19 -!- hppavilion[1] has quit (Ping timeout: 252 seconds). 
21:33:54 -!- hppavilion[1] has joined. 
21:34:16 -!- spiette has quit (Quit: :qa!). 
21:35:00 -!- spiette has joined. 
21:39:22 -!- hppavilion[1] has quit (Ping timeout: 252 seconds). 
21:47:31 -!- J_Arcane has joined. 
22:04:25 -!- AnotherTest has quit (Quit: ZNC - http://znc.in). 
22:43:27 -!- spiette has quit (Quit: :qa!). 
22:47:13 -!- jaboja has quit (Remote host closed the connection). 
22:50:31 <b_jonas> I'm trying to line up the Szabó Lőrinc translation and the original of Tennyson: Ulysses exactly. But it turns out the translation is one line shorter. 
22:50:50 <b_jonas> It's missing the line that would correspond to “Death closes all: but something ere the end,” 
23:08:27 -!- ais523 has quit. 
23:27:37 -!- oerjan has joined. 
23:27:46 -!- shikhin has changed nick to shikhun. 
23:28:17 -!- shikhun has changed nick to shikhin. 
23:28:21 <lambdabot> int-e said 6h 47m 51s ago: btw does ghc allocate a thunk for a simple lst = 1 : lst <-- wow, apparently not (checked assembly output from ghc-7.10.2 with -O2, native code gen) 
23:28:21 <lambdabot> int-e said 6h 45m 14s ago: even ghc-7.6.3 didn't allocate a thunk, that's as far back as I can easily go 
23:34:03 -!- FreeFull has quit (Quit: Rebooting). 
23:36:13 -!- FreeFull has joined. 
23:41:15 -!- b_jonas has quit (Ping timeout: 250 seconds). 
23:42:17 -!- b_jonas has joined. 
23:43:30 -!- p34k has quit. 
23:43:37 -!- hppavilion[1] has joined. 
23:47:52 -!- boily has joined. 
23:48:12 <boily> fungot: fungellot. 
23:48:13 <fungot> boily: it is edited to remove the 0 parameter? i was thinking 
23:48:20 <boily> fungot: no you weren't. 
23:48:21 <fungot> boily: ( code is 2.1mb so that may not have 
23:48:26 <fungot> boily: wait a minute!! i am having trouble with this stub generator. it has. 
23:48:47 <boily> fungot: stub generators suck, stubs suck, and asynchronous services especially suck. 
23:48:47 <fungot> boily: sperber was here mar 17 at 11:11 pm utc, saying: or check out file-select and tcp-listener-fileno 
23:48:53 <oerjan> "it has." seems a bit too stubby indeed. 
23:50:22 <shachaf> @@ @tell oerjan @@ @@ (@where weather) ENVA KOAK 
23:50:58 <lambdabot> Plugin `compose' failed with: <<timeout>> 
23:51:28 <shachaf> boily: Good afternoon, person. 
23:52:04 <oerjan> boily: i dunno but he was there mar 17 hth 
23:52:25 <shachaf> mar 17 hasn't happened yet 
23:52:43 <oerjan> then why is fungot using past tense, duh 
23:52:44 <fungot> oerjan: with the procedure for-each? ie i have a question about static links. i really should read up on macros? like atom? 
23:53:13 <boily> time to a fungot is an irrelevant concept hth 
23:53:14 <fungot> boily: i don't apply this level of dynamic typing... it mentioned that static typing is in the browser while allowing quick access to the enclosing command. 
23:53:26 <oerjan> fungot: are you a dreen 
23:53:26 <fungot> oerjan: because bash gets exactly 3 parameters with that invocation, and 0 added to any number of arguments, you have 
23:54:41 <fungot> shachaf: some may.....but not all. but many more possibilities than chess. many. most things just work. at least now atm 
23:58:26 * boily wraps fungot in a chicken costume 
23:58:27 <fungot> boily: and i think he said some weird things involving crazy symbols and actions. i'm purely interested in the same ballpark, and roughly between chicken and stalin might be one way of doing that 
23:59:20 -!- grabiel has joined.