00:02:05 -!- TheColonial has joined.
00:04:22 -!- nooga has quit (Ping timeout: 246 seconds).
00:05:37 <Sgeo> kmc, when you asked me how many interviews I had, were you counting telephone interviews?
00:05:48 <Sgeo> Because if you were, I did have one (before I got the upcoming one today)
00:13:39 -!- nooga has joined.
00:21:12 -!- pikhq has quit (Ping timeout: 264 seconds).
00:30:55 -!- doesthiswork has quit (Quit: Leaving.).
00:36:19 -!- nooodl has quit (Ping timeout: 248 seconds).
00:38:31 <elliott> um i thought i had a question
00:39:29 <HackEgo> THECOLONIAL: WELCOME TO THE INTERNATIONAL HUB FOR ESOTERIC PROGRAMMING LANGUAGE DESIGN AND DEPLOYMENT! FOR MORE INFORMATION, CHECK OUT OUR WIKI: HTTP://ESOLANGS.ORG/WIKI/MAIN_PAGE. (FOR THE OTHER KIND OF ESOTERICA, TRY #ESOTERIC ON IRC.DAL.NET.)
00:40:33 <HackEgo> ThEcOlOnIaL: wElCoMe tO ThE InTeRnAtIoNaL HuB FoR EsOtErIc pRoGrAmMiNg lAnGuAgE DeSiGn aNd dEpLoYmEnT! fOr mOrE InFoRmAtIoN, cHeCk oUt oUr wIkI: hTtP://EsOlAnGs.oRg/wIkI/MaIn_pAgE. (FoR ThE OtHeR KiNd oF EsOtErIcA, tRy #EsOtErIc oN IrC.DaL.NeT.)
00:40:53 <Bike> do you come in a set with ThePostcolonial
00:40:55 * oerjan pats his question dissolving raygun
00:40:59 <HackEgo> /home/hackbot/hackbot.hg/multibot_cmds/lib/limits: line 5: exec: welcome: not found
00:41:12 <Lumpio-> I was pretty sure there was an obnoxious fullwidth one
00:41:16 <TheColonial> Bike I've never been asked before, but I'll see if I can drum something up :)
00:41:38 <Bike> or is it ThePostColonial
00:41:41 <Bike> i just don't know
00:41:53 <elliott> sounds like a really shitty band
00:41:54 <HackEgo> TheColonial: Welcome to the international hub for esoteric programming language design and deployment! For more information, check out our wiki: http://esolangs.org/wiki/Main_Page. (For the other kind of esoterica, try #esoteric on irc.dal.net.)
00:42:15 <Bike> elliott: but they're so sensitive to modern issues!!
00:42:24 <Lumpio-> Can "colonial" refer to the colon?
00:42:25 <TheColonial> I stumbled on this channel in a rather esoteric way, how fitting.
00:42:27 <oerjan> welcome to the welcome bubble
00:42:47 <TheColonial> Lumpio-: sometimes I sign in to places which chop usernames to 8chars.. I end up as "TheColon".
00:44:07 <TheColonial> I see someone has had a fair bit of fun with the welcome script.
00:44:25 <TheColonial> So is this place packed with PLT geeks? :)
00:45:10 <Bike> do you count as a PLT geek if you can say "homotopy type theory" without stuttering
00:45:43 <TheColonial> I believe I can, but I reckon that doesn't make me a PLT geek!
00:48:37 <oerjan> i can say it without stuttering, but i don't actually know what it means!
00:48:50 <elliott> i thought oerjan knew everything...
00:48:51 * oerjan knows some other type theory though
00:49:32 <oerjan> elliott: my intuition knows everything, but it is very bad at communicating with my rational mind...
00:49:38 <TheColonial> I actually stumbled on this channel after a Google search coughed this up http://tunes.org/~nef//logs/esoteric/13.01.03
00:50:49 <Bike> esoteric, where even your segfaults are turing-complete
00:50:55 <elliott> who would still join this place after getting logged evidence of how terrible it is??
00:51:17 <TheColonial> Well, that says a lot about me doesn't it.
00:51:37 <TheColonial> I'm actually interested in the discussion between kmc and shachaf
00:52:48 <TheColonial> so do you guys all spend time working on crazy languages of your own?
00:53:03 <elliott> we spend roughly 0% of our time working
00:53:24 <elliott> if you're looking for sitting around and complaining about things though then this is your channel
00:54:43 -!- nooga has quit (Ping timeout: 256 seconds).
00:56:44 -!- pikhq has joined.
01:07:31 -!- zzo38 has joined.
01:07:32 -!- ogrom has quit (Quit: Left).
01:11:35 <HackEgo> TheColonial: Welcome to the international hub for esoteric programming language design and deployment! For more information, check out our wiki: http://esolangs.org/wiki/Main_Page. (For the other kind of esoterica, try #esoteric on irc.dal.net.)
01:19:18 <shachaf> Bike: I think you have to at least know what it is.
01:19:51 <elliott> it's type theory that uses homotopy
01:20:28 <shachaf> I went to dolio's talk about it once.
01:20:37 <shachaf> http://comonad.com/reader/wp-content/uploads/2011/10/slides.pdf
01:43:14 -!- TeruFSX has joined.
01:49:01 <TheColonial> shachaf: do you have a second for a question?
01:49:58 <TheColonial> I'm actually interested in a discussion between you and kmc... found it here: http://tunes.org/~nef//logs/esoteric/13.01.03
01:53:35 <shachaf> I maintain that you should ask your question into the abyss of #esoteric and see what happens.
01:59:34 -!- GOMADWarrior has joined.
02:03:01 -!- carado has quit (Ping timeout: 246 seconds).
02:07:16 -!- fftw has quit (Ping timeout: 248 seconds).
02:07:52 -!- fftw has joined.
02:15:50 <HackEgo> olist: shachaf oerjan Sgeo
02:16:15 <shachaf> oerjan: I don't see an update since Sgeo's `olist?
02:16:48 <shachaf> Which you thanked him for, so presumably you've seen it?
02:17:03 <oerjan> maybe i'm just permanently 1 behind y'all :P
02:17:05 <shachaf> (Or maybe that was another cache issue?)
02:17:21 <oerjan> after his `olist i found #874, which was new to me then
02:17:36 <oerjan> i thought it _was_ fixed, since new ones appeared...
02:17:37 <shachaf> When I saw his `olist I saw 875
02:18:28 <shachaf> Do what I do: Disable your cache.
02:20:02 <oerjan> maybe there is something weird about how giantitp.com does things
02:20:34 <oerjan> if i disable cache, won't everything else frequently visited load slowly
02:26:09 -!- monqy has joined.
02:29:41 -!- madbr has joined.
02:30:10 <madbr> the guy in that ARM paper said that all architectures have warts... so true
02:30:49 <shachaf> oerjan: By "disable cache" I mean that I do everything in Incognito Mode in Chromium.
02:30:59 <shachaf> So all cache/cookies/etc. is lost between sessions.
02:31:43 <oerjan> oh hm maybe i could visit _just_ oots in privacy mode...
02:32:02 <madbr> context : I just learned that MIPS has an add opcode that causes an exception if the result overflows :O
02:32:20 * oerjan has never used that mode before, afahr
02:32:37 <Bike> madbr: what, integer add?
02:32:56 <shachaf> I don't remember how it goes.
02:33:05 <oerjan> i recall seeing such an option
02:33:06 <madbr> bike : it's just as crazy in float but yeah integer add
02:33:49 <Bike> haha, what the hell? why?
02:34:10 <madbr> it has a non-exception causing add too tho
02:34:11 <Fiora> being able to trap overflows in a language without extra overhead ,I'm guessing?
02:34:22 <Fiora> in x86 you'd have to do a "jo" after every single add
02:34:28 <madbr> fiora : yeah but in practice there's no point
02:34:46 <madbr> there's no language where that's a sensible thing to do
02:35:01 <Fiora> ... I thought lisp had to check for overflow on its adds?
02:35:30 <Fiora> sounds kinda nice for debugging too, though now there's that IOC thing
02:35:35 <madbr> what, overflowing ops are automatically promoted to bignums?
02:35:54 <Fiora> um. bike would know more
02:36:09 <madbr> fiora : doing a "jo" after every add is probably faster actually
02:36:25 <Fiora> I'm guessing the jo is faster if it ever actually happens
02:36:33 <Fiora> and slower if it doesn't?
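[Aside: a minimal sketch of the checked-add approach being weighed here, assuming GCC/Clang where __builtin_add_overflow exists; the abort() stands in for the hardware trap a MIPS-style trapping add would raise.]

    #include <stdlib.h>

    /* Roughly what "a jo after every single add" buys you on x86:
     * do the add, then branch on the overflow condition. */
    static int checked_add(int a, int b) {
        int sum;
        if (__builtin_add_overflow(a, b, &sum))
            abort();   /* stand-in for the trap */
        return sum;
    }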
02:36:47 <madbr> unless it's using leftover silicon
02:37:53 <madbr> but normally if you're actually in the kind of case where it would be useful, you're probably in some non-numerical code
02:38:12 <madbr> that is full of branches and memory loads
02:38:33 <Bike> yeah, dropping to bignums makes everything way slow
02:39:24 <madbr> yeah, just like in floating point when you have INF or NAN
02:39:36 <madbr> or denormals, those are the WORST
02:39:38 <Bike> I doubt CL is the only system that would want dropping to bignums to be easier, though
02:39:43 <Bike> does anyone actually like denormals?
02:40:30 <madbr> the stupid handling of denormals on x86 is a bad problem in sound handling code
02:42:05 <Fiora> it's that thing where if you end up with denormals, it gets like 100x slower, right?
02:42:05 <madbr> it's like HELLO YOUR PROCESSING IS SUDDENLY HUNDREDS OF TIMES SLOWER WHOOPS YOU MISSED THE SOUNDBUFFER DEADLINE OH WELL AT LEAST YOUR RESULT IS GONNA BE REALLY ACCURATE NO?
02:42:12 <Fiora> so you can practically DDOS a system
02:42:15 <Fiora> by feeding it bad floats
02:42:34 <Bike> that sounds about as hilarious an attack as redos
02:42:34 <madbr> and it's really easy to get denormals in sound
02:42:46 <madbr> send a sound into a reverb then send silence
02:43:13 <madbr> yay your values are now decaying towards the ones that will make your cpu blow up
02:43:40 <madbr> same for filters (except faster)
02:43:47 <Fiora> does DAZ solve that problem?
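[Aside: what FTZ/DAZ look like in practice, as a hedged sketch using the usual SSE intrinsics (_MM_SET_FLUSH_ZERO_MODE from <xmmintrin.h>, _MM_SET_DENORMALS_ZERO_MODE from <pmmintrin.h>); set once per audio thread, a decaying reverb tail snaps to zero instead of crawling through the slow denormal path.]

    #include <xmmintrin.h>   /* FTZ control, SSE */
    #include <pmmintrin.h>   /* DAZ control, SSE3 */

    static void audio_thread_init(void) {
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          /* results that would be denormal become 0 */
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  /* denormal inputs are read as 0 */
    }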
02:43:47 <Jafet> "Broadwell will introduce some ISA extensions: * ADOX/ADCX for arbitrary precision integer operations"
02:44:10 <Bike> Jafet: yeah that looked neat
02:44:19 -!- TeruFSX has quit (Ping timeout: 248 seconds).
02:44:43 <Jafet> Yeah umm let's speed up addition??
02:44:53 <Fiora> Jafet: it's a flag problem
02:45:14 <madbr> flags registers kinda suck tbh
02:45:16 <Jafet> Doesn't intel already have fused multiply-add
02:45:22 <Fiora> flag dependencies basically forced really latency-bottlenecked situations
02:45:30 <Bike> madbr: what would you prefer?
02:45:34 <Fiora> because the instruction you want to do overwrites the flag you needed
02:45:45 <madbr> bike: no flag registers
02:45:49 <Fiora> so I think the BMI2 instructions (the mulx, shlx, adcx, etc) are to avoid that
02:46:12 <Fiora> I think ARM lets you choose whether or not to set the flags register with certain instructions, which also avoids most of the problem?
02:46:17 <Jafet> Ok not on integers
02:46:22 <madbr> like a cmp+branch instruction for instance
02:46:33 <Bike> madbr: so like you just have je x,y,address instead?
02:46:50 <Fiora> the flags here are for carry and stuff though :<
02:47:15 <Fiora> I don't think there's a very pretty way to multiply/add big numbers without flaggy-like things
02:47:27 <Fiora> for the carry bits
02:47:44 <Bike> well carry lookahead seems kind of insane anyway
02:47:59 <madbr> fiora: you can compare the result of the addition with the previous value (unsigned)
02:48:12 <madbr> if the result is smaller, there has been an overflow
02:48:28 <Bike> you could dump into double the word size, like MIPS in HI and LO
02:48:29 <Fiora> that's probably going to be a lot of extra instructions though...
02:48:41 <Bike> and then just compare HI with zero
02:48:46 <madbr> fiora: well, it's one extra jump compared to adc
02:48:54 <Fiora> oh geez don't make it a jump
02:49:06 <Fiora> that jump would be completely unpredictable ^^;
02:49:20 <Fiora> since carry bits are probably pretty random
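[Aside: madbr's flag-free carry trick, written out as a small branchless C sketch (type and names are illustrative): the carry out of the low limb is just the unsigned "did the sum wrap?" compare.]

    #include <stdint.h>

    typedef struct { uint64_t lo, hi; } u128;   /* illustrative two-limb integer */

    static u128 add128(u128 a, u128 b) {
        u128 r;
        r.lo = a.lo + b.lo;
        uint64_t carry = (r.lo < a.lo);   /* 1 iff the low add overflowed */
        r.hi = a.hi + b.hi + carry;       /* conditional increment, no jump */
        return r;
    }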
02:49:41 <madbr> I think arm64 did it with a conditional increment/add of some kind
02:50:00 <Fiora> I don't think arm has the issue at all though, because it has flag-setting and non-flag-setting versions of instructions
02:50:05 <Fiora> so it doesn't run into the same problem x86 has?
02:50:06 <Bike> imo we just need a processor without arithmetic
02:50:11 <Bike> obviously it's impossible to do right
02:50:25 <Fiora> Bike: so i finally have an excuse to link you https://en.wikipedia.org/wiki/Transport_triggered_architecture
02:50:37 <Fiora> it's an OISC with nothing but moves
02:50:38 <Bike> are those still used at all?
02:50:57 <Bike> i've never really thought it counted as oisc
02:51:11 <madbr> inc_if_smaller rh, rl, al
02:51:40 <Jafet> Bike: an instruction can do anything
02:51:42 <madbr> ok that's one instruction longer than with adc
02:52:39 <Fiora> probably on ARM it'd be something like, add, add, cmp, conditional inc?
02:52:41 <Bike> Jafet: machine where the one instruction runs the accumulator register as a turing machine
02:52:42 <madbr> also if you add a SIMD unit later on, that's usually a good place to do 64bit math
02:52:51 <madbr> fiora: arm has adc
02:53:01 <Fiora> oh! so just, adds + adc?
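[Aside: a hedged note that on 32-bit ARM you usually get exactly that pair out of the compiler from a plain C 64-bit add; the function name is illustrative.]

    #include <stdint.h>

    /* gcc -O2 for 32-bit ARM typically lowers this to adds/adc on the register pairs */
    uint64_t add64(uint64_t a, uint64_t b) { return a + b; }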
02:53:22 <madbr> or you can do it in SIMD/NEON
02:53:32 <Fiora> does neon have 64-bit math like that?
02:53:33 <madbr> in which case you can do it on 2 ops at the same time
02:53:50 <Jafet> Bike: ok but you need to make the register width large enough to fit a UTM
02:53:54 <madbr> fiora: yeah, I'm not sure what use it has but yeah it has that
02:54:08 <Bike> Jafet: 64 bits are enough for anybody.
02:54:13 <Fiora> so far I think NEON only has 64-bit execution units though so it might not be faster :<
02:54:14 <Jafet> (This makes the proof of turing completeness easier)
02:54:18 <Bike> That's like, at least a billion machines.
02:54:33 <madbr> fiora: actually it depends on the instructions
02:54:56 <madbr> fiora: on some instructions 128-bit isn't faster (floating point in particular)
02:55:11 <Fiora> I remember hearing even a15 still had 64-bit units but I might be totally wrong since I don't really know much about this
02:55:22 <madbr> fiora: but I'm pretty sure integer addition is 128bit so you can do it in "1 cycle"
02:55:33 <Fiora> it doesn't have a 64-bit multiply it looks like :/
02:55:48 <madbr> ha no multiply is faster if you use smaller ops
02:55:57 <Fiora> I meant, no 64-bit datatype
02:56:08 <madbr> but on the a8 you want to do everything in 16 bits
02:56:09 <zzo38> Do you know if FPGA can be used with through-hole?
02:56:21 <madbr> fiora: a multiplier that large would probably never be fast
02:57:01 <Fiora> I think the ivy bridge 64x64->128 multiplier takes 3 cycles?
02:57:18 <madbr> even the timings for 32x32 multiplication would be kinda blah
02:57:27 * Bike imagines a processor with Knuth automata to make multiplication O(1)
02:57:32 <Fiora> yeah, 1 cycle throughput
02:57:43 <madbr> fiora : that sounds way too low
02:58:03 <madbr> fiora : actually on recent processors, wide multiplications tend to have long latencies
02:58:17 <Fiora> I have no idea how they got it so fast
02:58:34 <madbr> due to the short pipelines and ultra short propagation delays
02:58:56 <madbr> haven't seen 2 cycle latency addition yet but I bet they'll eventually do that
02:59:16 <Fiora> The bulldozer is 6 cycle latency, 4 cycle inverse throughput for the same, I think, if I'm reading this right
02:59:32 <madbr> 64x64 multiply isn't very useful
02:59:52 <Fiora> I think it's useful for arbitrary precision multiplies?
03:00:00 <zzo38> Do you know if FPGA programming software will run on a VM image which has CentOS?
03:00:13 <Fiora> huh. the K10 was 4/2, but the high 64 bits had an extra cycle of latency
03:00:35 -!- GOMADWarrior has quit (Ping timeout: 244 seconds).
03:01:15 <madbr> fiora : I don't think arbitrary precision code is very useful
03:01:23 <Fiora> I thought that was the topic though... :<
03:01:29 <Fiora> since that's what the adox/adcx were for or something
03:01:39 <madbr> maybe in cryptography and mathematics applications
03:02:01 <madbr> but other than that, 32bits solves like everything
03:02:11 <madbr> except >4gb memory
03:02:32 <Jafet> Multiplication is asymptotically log(n)
03:02:48 <Bike> schonhage-strassen in hardware let's do it
03:03:01 <Jafet> You just need to use about n^2 of circuit area or something
03:03:39 <madbr> I'd rather have them use more circuit area for 32bit float :D
03:04:52 <Jafet> 32bit float isn't very useful
03:05:33 <madbr> audio processing these days is essentially a mountain of 32bit float dude
03:05:41 * Fiora wants more circuit area for 16-bit int? <.<
03:05:54 <Bike> what do you use 16 bit int for
03:05:54 <madbr> the rest of the pipeline is only there for feeding in 32bit floats
03:06:23 <Fiora> Bike: it's like, a really good size for stuff where you don't need much range or precision
03:06:34 <madbr> same for 3d games, they're a mountain of floats
03:06:49 <madbr> fiora: why not just use 32bit registers for the 16bit ints?
03:07:33 <Jafet> Why not use 64bit floats for the 32bit floats
03:08:01 <madbr> jafet : that's what the fpu does (except using 80 bits)
03:08:45 <Fiora> I think that's only x87...
03:09:18 <madbr> 16 bit packed int makes sense for SIMD units yeah
03:09:33 <madbr> and video game audio in particular
03:10:03 <Bike> wow i wonder what madbr's day job is
03:10:05 <Fiora> it's super useful for image stuff, like for resizing an image with 8-bit pixels, 16-bit intermediates are sorta good enough
03:11:44 <Fiora> probably something incredibly cool <.<
03:12:04 <Bike> whenever i think of audio processing i just think of analog ones though, which is silly
03:12:41 <madbr> you mean like analog opamp based amplifiers?
03:13:01 <Jafet> That's okay, everyone knows analog sounds better
03:13:12 <Bike> probably because the only digital audio things i've fucked with consciously imitated synthesizer programming
03:13:33 <madbr> jafet : ahahahahahah
03:13:51 <Bike> you know madbr i think jafet might be interested in your goat
03:15:47 <madbr> but yeah tbh the basic audio processing is essentially emulating a digital sampler
03:16:09 <madbr> it's like the perfect thing for generating audio off a CPU
03:16:48 -!- sebbu has quit (Ping timeout: 272 seconds).
03:37:30 -!- Phantom_Hoover has quit (Remote host closed the connection).
03:49:19 -!- Frooxius has quit (Ping timeout: 260 seconds).
03:59:31 -!- Arc_Koen has quit (Quit: Arc_Koen).
04:13:44 <zzo38> How difficult is it to port SDL to other computers?
04:15:39 -!- Bike has quit (Quit: Reconnecting).
04:15:43 <zzo38> Furthermore, how do you deal with it if some of the keys have symbols that are not available in Unicode?
04:15:43 <zzo38> Or even control keys that are not available in SDL?
04:15:46 -!- Bike has joined.
04:17:34 -!- TheColonial has quit (Quit: leaving).
04:37:12 -!- oerjan has quit (Quit: leaving).
04:40:58 -!- muqayil has changed nick to upgrayedddd.
05:03:17 -!- azaq23 has joined.
05:03:25 -!- azaq23 has quit (Max SendQ exceeded).
05:03:50 -!- azaq23 has joined.
05:26:49 <Sgeo> Dear Chrome. When I google LLAP, I do not expect you to autocomplete it to llapingachos
05:27:08 <Sgeo> Worst part is it didn't offer a link for what I originally typed
05:27:27 <Sgeo> Because the autocompletion was in the address bar, not Google's decision after seeing the original
05:39:55 -!- md_5 has quit (Quit: ZNC - http://znc.in).
05:40:53 <zzo38> Is it possible to change the settings to fix all of those things?
05:41:25 -!- md_5 has joined.
05:49:50 -!- md_5 has quit (Quit: ZNC - http://znc.in).
05:51:12 -!- md_5 has joined.
06:13:34 -!- upgrayedddd has changed nick to abumirqaan.
06:19:23 -!- TeruFSX has joined.
06:22:55 <kmc> why can't i find goddamn mersenne twister test vectors
06:26:14 <zzo38> Why do you need a mersenne twister test vectors?
06:26:22 <zzo38> Maybe Wikipedia has some?
06:30:22 <kmc> because i have implemented mersenne twister and want to make sure i've done so correctly
06:30:46 <kmc> i have found some files on the website, now to verify that they implement the version of the algorithm that i implemented
06:56:47 <elliott> what'd you implement it for
06:57:05 <Bike> Because fuck that law of cryptography, man.
06:57:32 <Bike> The one that says you shouldn't implement crypto things yourself because you'll fuck it up.
06:57:35 <shachaf> I don't think the law applies when you're trying to break other people's bad cryptography.
06:57:51 <Bike> Ooh is that what kmc's doing?
06:59:05 <Bike> I guess it would fit with his usual predilections.
06:59:21 <Bike> Either that or maybe he just wanted to Mersenne it up. Nuthin wrong with that.
06:59:44 <shachaf> Is there any situation where you want to use a mersenne twister in cryptography, anyway?
07:04:04 -!- azaq23 has quit (Read error: Operation timed out).
07:04:17 <elliott> kind of hate the mythology around crypto
07:04:30 <elliott> though hate people who do their own crypto and refuse to admit they don't know enough to more
07:04:55 <Bike> Don't enough to more?
07:09:05 <elliott> of course the lisper can't figure out where the parens go
07:09:12 <elliott> (imagine I spat after saying "lisper". possibly before as well)
07:09:38 <Bike> I seriously can't understand you though.
07:10:06 <Bike> Oh, wait. "Though, hate people who do their own crypto (and refuse to admit they don't know enough to) more"
07:17:48 <shachaf> wait wait wait Bike is a lisper?
07:18:05 <Bike> yeah, elliott already drowned me in his saliva, so now I'm a ghost.
07:18:39 <Bike> It happens when your trachea is interrupted for long periods. Sorry?
07:23:40 -!- azaq23 has joined.
07:34:41 -!- ogrom has joined.
07:36:35 -!- madbr has quit (Quit: Radiateur).
07:37:44 -!- TeruFSX has quit (Ping timeout: 256 seconds).
07:43:32 -!- epicmonkey has joined.
08:05:48 -!- kallisti has joined.
08:05:48 -!- kallisti has quit (Changing host).
08:05:48 -!- kallisti has joined.
08:09:09 -!- ogrom has quit (Quit: Left).
08:09:15 -!- epicmonkey has quit (Read error: Operation timed out).
08:31:29 -!- fftw has changed nick to Euphemism.
08:35:53 -!- epicmonkey has joined.
08:40:16 <zzo38> What is the resonant frequency of a magic spell to make anti-magic field in Dungeons&Dragons game?
08:41:02 <kmc> Bike: i'm doing the puzzles where you're told to implement bad crypto and then break it
08:41:24 <kmc> if you want to do them, email sean@matasano.com
08:41:39 <kmc> there's not, like, a web page about it or anything
08:41:51 <kmc> you email him and he gives you some crypto puzzles
08:41:58 <Bike> yeah, i saw earlier.
08:42:46 <elliott> does everyone get the same ones
08:42:58 <kmc> TIL that you can reconstruct the internal state of a mersenne twister from 624 consecutive outputs
08:43:06 <Bike> That's the last puzzle, elliott.
08:43:10 <kmc> and thus predict every output it's going to give after that
08:43:14 <kmc> elliott: i think so, don't really know
08:43:40 <Bike> kmc: Anything you can do with nonconsecutive (if you don't know how many times it fired between your samples)?
08:44:11 <shachaf> kmc: Oh, I'd heard that before.
08:44:14 <kmc> well you can guess how many times
08:44:25 <Fiora> kmc: that's because it's not a secure PRNG, right?
08:44:25 <kmc> there probably are fancy things you can do as well
08:44:27 <kmc> but i don't know them
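[Aside: a sketch of why 624 consecutive outputs are enough, using the standard MT19937 tempering constants; the untempering helpers are illustrative, not from the log. Each output is one 32-bit state word pushed through an invertible tempering step, so untempering 624 outputs in a row recovers the whole 624-word state, after which you can run the generator forward yourself.]

    #include <stdint.h>

    /* invert y ^= y >> s by iterating to a fixed point */
    static uint32_t undo_rshift_xor(uint32_t y, int s) {
        uint32_t x = y;
        for (int i = 0; i < 32 / s + 1; i++)
            x = y ^ (x >> s);
        return x;
    }

    /* invert y ^= (y << s) & mask the same way */
    static uint32_t undo_lshift_and_xor(uint32_t y, int s, uint32_t mask) {
        uint32_t x = y;
        for (int i = 0; i < 32 / s + 1; i++)
            x = y ^ ((x << s) & mask);
        return x;
    }

    /* map one MT19937 output back to the state word it was tempered from */
    static uint32_t mt_untemper(uint32_t y) {
        y = undo_rshift_xor(y, 18);
        y = undo_lshift_and_xor(y, 15, 0xEFC60000u);
        y = undo_lshift_and_xor(y, 7,  0x9D2C5680u);
        y = undo_rshift_xor(y, 11);
        return y;
    }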
08:44:30 * shachaf should finish the last one of this set...
08:44:39 <Bike> Fiora: yeah it doesn't have a great distribution either iirc
08:44:40 <kmc> Fiora: i would put the causality the other direction but yes
08:44:57 <shachaf> Well, if the design goal had been to make it a secure PRNG, then it wouldn't have this property.
08:45:11 <shachaf> Causality is a lie anyway.
08:45:32 <Bike> if then -> causality, nice
08:45:35 <shachaf> Do you know things about modal logic?
08:45:48 <kmc> yeah, the authors are quite clear on the fact that it shouldn't be used for crypto as-is
08:45:51 <kmc> but people do it
08:46:09 <kmc> or rather, people use rand() for crypto and don't know or care what algo it is, and it's often MT
08:46:10 <Fiora> aren't there simpler secure PRNGs?
08:46:25 <Bike> reminds me of one of knuth's exercises
08:46:25 <Fiora> that makes sense, just blindly using rand()
08:46:42 <Bike> which was just "look up how your installation's supposed CSPRNG works and be horrified"
08:47:21 <Bike> Also: I am aware that modal logic exists.
08:47:22 <Fiora> would that be /dev/urandom?
08:47:47 <kmc> shachaf: i know a small bit about linear temporal logic
08:48:06 <shachaf> I went to a talk about model-checking once!
08:48:36 <shachaf> I learned a little bit about LTL and that other one.
08:51:50 -!- epicmonkey has quit (Ping timeout: 272 seconds).
08:56:47 <kmc> what's CTL for
09:04:28 <kmc> i'm not sure what security properties exactly would be meant by "secure PRNG"
09:04:46 <kmc> but maybe it would be the same thing as a stream cipher
09:05:29 <kmc> got to sleep though, ttyl all
09:05:44 <Fiora> http://en.wikipedia.org/wiki/Cryptographically_secure_pseudorandom_number_generator ?
09:06:19 <Bike> how about "can't use it as part of an attack" as a property :)
09:07:57 <Bike> or you could just use an lcg. live fast die hard
09:08:38 <Bike> http://en.wikipedia.org/wiki/File:Lcg_3d.gif ~
09:12:28 <Fiora> http://en.wikipedia.org/wiki/RANDU "It is widely considered to be one of the most ill-conceived random number generators designed. "
09:12:52 <Bike> knuth's intro in taocp was funny
09:13:07 <Bike> he started off with a generator that consisted of him throwing arbitrarily picked operations together, as an undergrad
09:13:10 <Bike> "that's random right"
09:15:18 <Fiora> I like that subtlety that "rand()%1000" is not actually an unbiased way to pick a number in [0,1000)
09:15:38 -!- Nisstyre-laptop has quit (Quit: Leaving).
09:15:43 <Bike> "We guarantee that each number is random individually, but we don't guarantee that more than one of them is random."
09:16:08 <Bike> fiora, well, clearly you should rejigger your application to make that 1024!
09:16:58 <shachaf> Bike's solution sounds good to me.
09:17:05 <Fiora> the other day I was using rand() to do a memory timing benchmark thing (like, really simple, just randomly accessing elements in a giant array and timing each one)
09:17:08 <Bike> or just alter rand_max to be a power of 1000
09:17:20 <shachaf> Alternative: Generate a random number < 1000000, and then take that % 1000
09:17:36 <Fiora> I ran it on the hardware I was testing and it looked right, but then I ran it on my computer and it didn't seem to ever hit RAM
09:17:51 <Fiora> on the thing I was testing, RAND_MAX was INT_MAX or so
09:17:55 <Fiora> and on my computer it was 65535 >.>
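[Aside: the usual fix for the rand()%1000 bias, as a minimal sketch (function name illustrative): reject the uneven sliver at the top of rand()'s range so every residue mod n is equally likely.]

    #include <stdlib.h>

    /* uniform integer in [0, n), assuming 0 < n <= RAND_MAX */
    static int uniform_below(int n) {
        int limit = RAND_MAX - (RAND_MAX % n);   /* a multiple of n */
        int r;
        do {
            r = rand();
        } while (r >= limit);                    /* resample the biased tail */
        return r % n;
    }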
09:18:08 <shachaf> Bike: How do you talk about unions categorically?
09:18:37 <Bike> union is a morphism on Set, I suppose?
09:19:56 <shachaf> When you talk about cartesian products, you say that for any two objects A and B there's an object AxB such that blah blah.
09:20:43 <Bike> Is that categorical?
09:21:00 <Bike> sounds pretty standard, in that i can fill in the "blah blah"
09:21:02 <shachaf> Most of CT sounds that way.
09:21:12 <Bike> if it was category theory i wouldn't understand it, you see.
09:21:45 <zzo38> Well, they have certain properties such as associativity.
09:21:57 <shachaf> Well, the definition looks like this:
09:22:20 <shachaf> http://thefirstscience.org/images/Figure%20B4%20Arrow%20Theoreci%20Represnation%20of%20Product.png
09:22:30 <Bike> Now we're talkin'.
09:22:44 <shachaf> In particular the diagram on the right.
09:23:22 <shachaf> Given two sets, A and B, you have a product (AxB,pi_1 : AxB -> A,pi_2 : AxB -> B)
09:23:34 <shachaf> I.e. the object, and an arrow from the object to each of A and B
09:24:15 <shachaf> There are lots of things that behave that way, though.
09:24:15 <Bike> is this a pedagogy thing, is that why you're doing this
09:24:28 <shachaf> You don't want to be pedagogued?
09:24:39 <elliott> Bike: shachaf just asks people questions and then fills in all the details until they know as much as him, at which point they are obligated to figure out the answer.
09:24:47 <elliott> have fun learning about category theory
09:24:50 <Bike> well, i do, i'd just appreciate some warning, like elliott just gave.
09:25:03 <shachaf> elliott: Hey, you do it too!
09:25:05 <Bike> Anyway, yes, coproduct, generalized abstract nonsense as they say.
09:25:07 <shachaf> I learned about all this nonsense because of you.
09:25:18 <Bike> And you call the pies projections.
09:25:53 <elliott> I think Bike is optimally qualified here.
09:25:57 <shachaf> elliott: Actually what you do is ask a question and then keep bugging me about it until I know all the details.
09:26:08 <elliott> my approach is more effective I think
09:26:10 <Bike> But I'm such a troglodyte I think of this in such horrible ways as tagged unions.
09:26:27 <shachaf> Bike: Nothing wrong with that?
09:26:37 <Bike> P. sure everything I think is wrong.
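[Aside: since tagged unions came up, a hedged C rendering of the two diagrams (names illustrative): the product A x B is a struct carrying the two projections, and the coproduct A + B is a tagged union carrying the two injections.]

    /* A x B: one value of each, plus the projections pi_1 and pi_2 */
    typedef struct { int fst; double snd; } ProdIntDouble;
    static int    pi1(ProdIntDouble p) { return p.fst; }   /* A x B -> A */
    static double pi2(ProdIntDouble p) { return p.snd; }   /* A x B -> B */

    /* A + B: either a value of A or a value of B, plus the injections */
    typedef struct {
        enum { TAG_A, TAG_B } tag;
        union { int a; double b; } u;
    } SumIntDouble;
    static SumIntDouble inl(int a)    { SumIntDouble s; s.tag = TAG_A; s.u.a = a; return s; }
    static SumIntDouble inr(double b) { SumIntDouble s; s.tag = TAG_B; s.u.b = b; return s; }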
09:26:53 <shachaf> What do you think of monoids?
09:27:36 <shachaf> (psst the correct answer is easy.)
09:28:05 <shachaf> Bike: What do you think of monoids?
09:28:21 <Bike> You already asked that.
09:28:45 <shachaf> Sure, but now you know the answer.
09:29:03 <shachaf> I'll take it as "Bike doesn't want to learn about products".
09:29:10 <shachaf> Fiora wants to learn about products?
09:30:43 <Bike> It's just, it's time I should be sleeping, I'm supposed to be reading a paper about throwing things, and then suddenly question phrased in such a way that I can't tell if I'm supposed to actually explain it to someone who doesn't know.
09:30:59 <shachaf> Well, I actually have no idea.
09:31:19 <Bike> About unions? I was guessing that they're not that useful to category theory really
09:32:26 <shachaf> I was thinking maybe not but surely there's a way to talk about them somehow?
09:33:33 <Bike> that's why I guessed you wanted to talk about set unions in a category theory way, rather than try to generalize set unions appropriately as that didn't seem possible
09:35:30 <shachaf> Bike: Oh, is it just equalizers and coequalizers or something?
09:36:05 <monqy> did somebody say category theory
09:36:56 <shachaf> no you can go back to sleep
09:37:16 <shachaf> monqy: So how do you talk about unions in a categorical way?
09:38:30 <monqy> something about the colimit of some functor omega->Set?????????
09:38:59 <monqy> i guess thats a union of infinite inclusions tho hm
09:39:08 <monqy> yeah you know the ordinal number!!!!
09:39:22 <monqy> we pretend it's a category by writing out all the smaller ordinals and connecting them with arrows
09:42:58 <shachaf> Oh, it is just a coëqualizer.
09:44:40 <monqy> remember how i said i don't know ct yet!!!!
09:44:41 -!- xandgate has joined.
09:45:23 <Bike> it's looking increasingly less likely that category theory is a thing that is knowable
09:45:35 <shachaf> You said you would read Mac Lane?
09:45:40 <monqy> according to ncatlab you want a "coherent category", whatever that is
09:45:46 <shachaf> monqy: Also what's with all the question marks and exclamation marks and things?
09:45:55 <Bike> an unreachable platonic ideal of unreachable platonic ideals
09:46:07 <monqy> im slowly reading mac lane whenever im waiting for things to happen
09:46:17 <monqy> im hoping it seeps in before i forget it
09:46:35 <shachaf> Bike: A terminal object in the category of platonic ideals?
09:46:38 <monqy> today i learned all about colimits and limits
09:46:41 <Bike> "Unions of completely arbitrary sets make sense only in material set theory, where their existence is guaranteed by the axiom of union. In structural set theory, unions of arbitrary sets can generally be replaced by disjoint unions." yeah that would make sense
09:46:42 <shachaf> Or maybe in the category of cones or something.
09:46:57 <shachaf> monqy: Those are pretty great, right?
09:47:23 <Bike> too bad structuralist set theory has nothing to do with post-structuralism
09:47:39 <Bike> if only we had a lacan for category theory
09:48:50 <monqy> anyway unions are dumb and i'll concern myself no further with them
09:49:07 <shachaf> The pullback of A -> AunionB <- B is AintersectB?
09:49:18 <shachaf> With the obvious functions.
09:49:33 <shachaf> And the pushout of A <- AintersectB -> B is AunionB?
09:52:33 -!- Bike has quit (Quit: ockeghem).
09:53:14 <shachaf> monqy: Categories are the dumb one.
09:53:56 <monqy> Q.when did you go off the deep end
09:54:07 <monqy> “metaphorically speaking„
09:54:49 <shachaf> monqy: Which end is the deep one?
09:55:03 <shachaf> monqy: Category theory is so pointless even mathematicians don't like it.
09:55:39 <elliott> did you know the wiles proof of FLT involvted categories???? i learned this yeterday on wikipedaije
09:55:53 <shachaf> Involuted categories? That sounds scary.
09:56:53 <shachaf> Categories: Galois theory | Fermat's last theorem
09:59:57 <shachaf> monqy: should i read _master and margarita_
10:02:05 <shachaf> i thought maybe you heard of it
10:03:20 <monqy> i hear it's a fantastic farce mysticism romance satire
10:04:52 <shachaf> well for the purpose of this channel
10:06:01 <shachaf> are you saying you aren't even russian………
10:07:24 <monqy> p.s. what was that ctcp about
10:09:12 <monqy> alt. what's this russian stuff about. why would i be russian.
10:09:21 <shachaf> because then you would know about this book
10:11:28 -!- xandgate has left.
10:11:39 <shachaf> also lots of people are russian?
10:12:26 <monqy> though i also hear lots of people aren't russian
10:12:31 <monqy> can you believe it
10:14:20 <shachaf> monqy: have you considered just becoming russian
10:16:27 <monqy> i don't think it's so simple
10:22:26 -!- azaq23 has quit (Quit: Leaving.).
10:26:30 <mroman_> https://code.google.com/p/zopfli/
10:27:00 <zzo38> I don't like the types in C to have default signed/unsigned, and I think it would be better that if it is not specified, that each operation might be signed or unsigned depending on the computer and on the optimization.
10:27:08 <mroman_> How hard can it be to type umlauts.
10:28:00 <zzo38> It depends on your computer.
10:36:41 -!- ogrom has joined.
10:44:05 -!- nooga has joined.
10:45:45 -!- sebbu has joined.
10:45:45 -!- sebbu has quit (Changing host).
10:45:45 -!- sebbu has joined.
10:56:37 -!- wareya has quit (Read error: Connection reset by peer).
10:57:18 -!- nooodl has joined.
10:57:21 -!- wareya has joined.
11:28:47 -!- Phantom_Hoover has joined.
11:41:10 -!- nooodl has quit (Read error: Connection reset by peer).
11:41:10 -!- carado has joined.
11:43:51 -!- aloril has quit (Ping timeout: 256 seconds).
11:45:31 -!- monqy has quit (Quit: hello).
11:50:32 -!- DHeadshot has joined.
11:58:04 -!- aloril has joined.
12:18:06 -!- nooodl has joined.
12:23:57 <lambdabot> http://hackage.haskell.org/trac/ghc/newticket?type=bug
12:24:39 <shachaf> I guess you're not allowed to receive hugs.
12:24:48 <HackEgo> nooga hate OS X. NOOGA SMASH.
12:24:57 <HackEgo> noohga hahte OhS X. NOOhGA SMAhSH.
12:25:01 <HackEgo> #!/bin/sh \ topic=$(echo "$1" | tr A-Z a-z | sed "s/ *$//") \ topic1=$(echo "$topic" | sed "s/s$//") \ cd wisdom \ if [ \( "$topic" = "ngevd" \) -a \( -e ngevd \) ]; \ then cat /dev/urandom; \ elif [ -e "$topic" ]; \ then cat "$topic"; \ elif [ -e "$topic1" ]; \ then cat "$topic1"; \ else echo "$1? ¯\(°_o)/¯"; exit 1;
12:25:56 <olsner> `learn nooga hate OS X. NOOGA SMASH. Hug not allowed.
12:26:22 <HackEgo> shachahf sprø sohm selleri and cosplayhs Nepeta Leijohn ohn weekends.
12:28:45 <olsner> apparently sprø som selleri is a book about a psychologist making jokes (or whatever tuller means): http://www.pax.no/index.php?ID=Bok&counter=1103
12:29:17 <shachaf> @ask oerjan what is sprø som selleri
12:32:07 <olsner> oh, part of the book deals with archaic psychiatric treatments, like veal blood transfusions and scabies inplantation
12:32:11 -!- Euphemism has changed nick to fftw.
12:33:41 <olsner> oh, and centrifugueing or however you say that (centrifusion?)
12:37:45 <elliott> Anomaly: Uncaught exception Invalid_argument("List.combine"). Please report.
12:42:02 <elliott> Error: Found a matching with no clauses on a term unknown to have an empty
12:45:08 -!- Arc_Koen has joined.
13:35:34 -!- HackEgo has quit (Ping timeout: 244 seconds).
13:36:41 -!- HackEgo has joined.
13:40:25 -!- mekeor has joined.
13:47:33 -!- ogrom has quit (Quit: Left).
13:49:59 -!- carado has quit (Quit: Leaving).
13:52:18 -!- carado has joined.
13:54:00 <Sgeo> "In fact, when compared to Garamond, which wasn't originally designed for the screen, Comic Sans fares quite well in terms of readability."
13:54:06 <Sgeo> (Without anti-aliasing)
13:54:10 <Sgeo> http://kadavy.net/blog/posts/why-you-hate-comic-sans/
13:55:00 -!- cantcode2 has quit (Quit: ragequit).
14:03:12 <Phantom_Hoover> "Without anti-aliasing, fonts look jagged as if they were made of LEGOS."
14:15:41 <Lumpio-> That's just silly, you can do anti-aliasing with legos
14:22:18 -!- Frooxius has joined.
14:30:44 -!- ogrom has joined.
14:37:17 <Sgeo> I'm going to be AFK most of the day.
14:37:22 <Sgeo> Heading to a friend's house.
14:47:41 <Sgeo> Phantom_Hoover, have a Worlds mirror http://img843.imageshack.us/img843/2478/67591116.png
14:48:36 <Sgeo> Apparently Phantom_Hoover is scared of mirrors.
14:49:46 <Lumpio-> This... reminds me of Second Life
14:49:59 <Sgeo> Second Life doesn't have mirrors :(
14:50:06 <Lumpio-> "Club" at the bottom right is for wild deviant sex
14:50:15 <Lumpio-> And "Animal House" is also for that, except it's furries
14:50:50 <Sgeo> Actually, Worlds mirrors are portals that have the horizontal flipped.
14:50:54 <Lumpio-> Those things are always weird
14:51:07 <Sgeo> Worlds has portals. This is an awesome thing.
14:51:35 <Lumpio-> Speaking of which I was supposed to do portal rendering in webgl
14:51:57 <Lumpio-> But I got bored when I couldn't find any good scenes to try it on. You sort of need an interesting scene with enough details and texturing to make it look interesting.
14:52:01 <Lumpio-> And I suck at 3D modeling badly.
14:52:04 <Sgeo> Also a thin transparent wall in front to prevent people from accidentally walking into the mirror and ending up upside-down.
14:52:13 <Sgeo> Phantom_Hoover, you may now resume being terrified of Worlds mirrors.
14:53:02 <Phantom_Hoover> something that irks me: i'm not sure all those impossible geometries you can build with portals are actually non-euclidean like everyone says
14:55:24 <Sgeo> Here's a thing with a better body modification panel: http://25.media.tumblr.com/ea771a7c7328a138ded50bdca0fadd89/tumblr_miwphxRb4L1ruytnho1_1280.png
14:55:34 -!- GOMADWarrior has joined.
14:55:41 <Lumpio-> How do you define euclidean geometry again
14:55:52 <Lumpio-> http://en.wikipedia.org/wiki/Euclidean_geometry#Axioms anything that satisfies these?
14:56:35 <Sgeo> The thing that's on the wall that you can partially see in the mirror reads
14:56:43 <Sgeo> "The Gallery of Metamorphics"
14:58:26 <Lumpio-> http://virkkunen.net/b/portals.png well here's a thing that doesn't satisfy the "parallel postulate"
14:58:49 <Lumpio-> Those two black lines should meet each other because they're not parallel, yet they never do due to those two portals.
15:00:41 <Sgeo> If you're in a 3d world and can see yourself because of portals, is the world still possibly Euclidean, or would it violate a postulate
15:01:08 <Sgeo> There's a maze like that below Animal House
15:01:14 <Phantom_Hoover> <Lumpio-> http://en.wikipedia.org/wiki/Euclidean_geometry#Axioms anything that satisfies these?
15:01:28 <Phantom_Hoover> yeah, but it gets more complicated once you start working with arbitrary spaces
15:01:39 <Lumpio-> Sgeo: Can you go into a portal from the back? And then emerge from the back of the other portal?
15:01:59 <Lumpio-> Phantom_Hoover: Well the first axiom is pretty solid in any dimension right?
15:02:07 <Lumpio-> The ability to draw a straight line from any point to any point
15:02:13 <Phantom_Hoover> Lumpio-, the only thing you ever really care about is the parallel postulate
15:02:24 <Sgeo> Lumpio-, no, although you can have back-to-back portals that work like that, I think
15:03:11 <Lumpio-> Anyways we'd have to define what happens when an infinite ray hits the back of a portal then
15:03:26 <Phantom_Hoover> but there's also more complicated notions of 'locally euclidean' spaces and such
15:03:56 <Lumpio-> If it goes through and emerges from the back of the other portal, or magically "stops" there, in both cases you can violate the first axiom
15:04:02 <Lumpio-> i.e. the ability to draw a straight line between two points
15:04:22 <Phantom_Hoover> and essentially in that diagram you drew, at almost all points you can take a set around them where geometry is euclidean
15:05:07 <Lumpio-> Well you didn't say locally euclidean
15:05:31 <Lumpio-> If we just take any subspace that doesn't include the portals then obviously it's euclidean because that's how we defined (assumed?) it to be when not dealing with portals
15:05:44 <Sgeo> Lumpio-, lines that enter the back of a portal leave at the same portal, they ignore the portal
15:05:49 <Sgeo> Portals in Worlds are one-sided
15:06:09 <Sgeo> As though it wasn't there
15:08:05 <Lumpio-> http://virkkunen.net/b/portals2.png here
15:08:23 <Lumpio-> You can see your back via the portals
15:08:29 <Lumpio-> Now try to draw a straight line from A to B
15:09:07 <Sgeo> I can draw a line from B to A but not A to B.... how does that even make sense.
15:09:26 <Sgeo> I think the directionality of portals clashes with the lack of direction of lines
15:10:04 <Lumpio-> I'm pretty sure Euclid would be turning in his grave if we told him that a space where A to B is not the same as B to A is Euclidean
15:10:23 <Phantom_Hoover> Sgeo, i think essentially what you have done is made it stop being a metric space
15:10:33 <Lumpio-> But of course you can construct something that's Euclidean enough even with portals
15:10:35 <Lumpio-> http://virkkunen.net/b/portals3.png
15:10:47 <Lumpio-> There. The most useless portal ever.
15:11:11 <Lumpio-> Disclaimer for all above: I don't know math terminology properly
15:12:26 <Phantom_Hoover> a metric space is just one with a sufficiently sensible notion of distance between two points
15:12:26 <Sgeo> I do think in most Worlds rooms that portals are set against walls
15:12:44 <Phantom_Hoover> one of the conditions of sensibility is that d(a,b) = d(b,a)
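[Aside: the "sufficiently sensible notion of distance" is the standard metric-space axioms; in a space with one-way portals it is the symmetry axiom that fails.]

    d(a,b) \ge 0, \qquad d(a,b) = 0 \iff a = b, \qquad
    d(a,b) = d(b,a), \qquad d(a,c) \le d(a,b) + d(b,c)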
15:13:44 <Sgeo> In that maze, a one-way hallway wouldn't be made with the one-directionality of portals
15:14:03 <Sgeo> You'd hook up portal A in room A to portal B in room B, but portal B in room B would not be hooked up to portal A
15:15:00 <Phantom_Hoover> what do you see if you walk through a portal backwards
15:15:19 <Sgeo> You go to where that portal is hooked up to
15:15:24 <Sgeo> Just walking backwards into it
15:15:54 <Sgeo> Phantom_Hoover, you should try one of the spy rooms in Worlds
15:16:12 <Sgeo> They're worlds that are like the normal worlds, except a spy room where you can see people and they can't see you
15:16:25 <Sgeo> Walk into the regular area and you won't be able to get back into the spy room
15:17:39 <Sgeo> (without teleporting back in, I mean)
15:17:50 <Sgeo> I should see if Worlds works on WINE
15:17:58 <Sgeo> But not today. I have a friend to visit.
15:36:46 -!- carado has quit (Ping timeout: 246 seconds).
16:51:16 -!- DHeadshot has quit (Ping timeout: 272 seconds).
16:51:30 -!- DHeadshot has joined.
17:04:53 -!- Bike has joined.
17:16:21 -!- Taneb has joined.
17:18:29 -!- epicmonkey has joined.
17:19:46 -!- cantcode has joined.
17:55:34 -!- ogrom has quit (Quit: Left).
18:03:12 -!- sebbu has quit (Ping timeout: 244 seconds).
18:11:49 -!- madbr has joined.
18:12:00 <madbr> anyone here does verilog?
18:15:51 -!- sebbu has joined.
18:15:52 -!- sebbu has quit (Changing host).
18:15:52 -!- sebbu has joined.
18:45:05 <kmc> very high drug lover
18:45:45 <kmc> all i know about verilog / vhdl is the extreme level of copy-pasta
18:45:50 -!- cantcode has quit (Ping timeout: 260 seconds).
18:46:20 <madbr> playing around with a cpu design
18:47:18 <Fiora> I did a tiny bit of verilog but that was the class I dropped
18:47:38 -!- FreeFull has quit (Ping timeout: 244 seconds).
18:48:17 <madbr> trying to do something where a C compiler could vectorize more or less automatically
18:48:27 -!- FreeFull has joined.
18:48:50 -!- ogrom has joined.
18:51:58 <madbr> and the concept I have is what I'd call "staggered SIMD"
18:52:20 <madbr> which is somewhere between standard SIMD, superscalar and out of order...
18:53:02 <madbr> essentially instead of doing every SIMD slice operation on the same cycle, the first slice is done on cycle 1, second slice on cycle 2, etc...
18:53:15 <madbr> so that you can do stuff like real SIMD memory loads
18:53:37 <madbr> (since you do the load in unit 1 on first cycle, unit 2 on second cycle, etc...)
18:53:51 <madbr> also there are special feedback registers
18:53:55 <Fiora> isn't that like, the same thing as a pipelined CPU?
18:54:16 <madbr> yeah but it's pipelining different iterations of the same routine
18:54:37 <madbr> unit 1 is running iteration 1, unit 2 is running iteration 2...
18:54:41 <Fiora> I thought CPUs already do that...
18:55:04 <madbr> so actually it's like you're going though your loop 4 or 8 times at the same time
18:55:16 <Fiora> like the ARM instruction "load multiple" loads a bunch of registers (2 per cycle) from memory, and every cycle, another 2 are available for use
18:55:22 -!- TeruFSX has joined.
18:55:38 <madbr> fiora: yeah that's an example
18:56:46 <madbr> you'd have to space your memory load instructions tho
18:57:02 <Fiora> I don't think that does anything except save code size though...?
18:57:06 <madbr> suppose you're running 5 units at the same time
18:57:19 <madbr> [mem op] [alu] [alu] [alu] [alu] [mem op] [alu] [alu] [alu] [alu] [mem op] [alu] [alu] [alu] [alu] ...
18:58:16 <Fiora> I thought you already have to do that on in-order CPUs with multi-issue...
18:58:34 <madbr> they do that yeah but it's not the same kind of multi-issue
18:58:59 * Fiora doesn't understand the idea then, sorry
18:59:06 <madbr> OOO cpus fill the pipelines with different alu ops that are near by
18:59:25 <madbr> suppose you have a routine that's
18:59:58 <Fiora> I meant non-OOE things
19:00:13 <Fiora> like the atom has dual-issue but no OOE, so you have to write instructions in a way that they can pair
19:01:09 <madbr> jge r8, r9, loopend
19:01:24 <madbr> and you're using 4 units
19:01:46 <madbr> first cycle, units 2..4 are doing nops
19:01:57 <madbr> first unit does load r1, r8
19:02:08 <Fiora> yeah, that looks just like a normal pipelined CPU... I'm confused
19:02:24 <madbr> first unit does add r1, #14
19:02:38 <madbr> second unit does load r1, r8
19:02:52 <Fiora> but load r1, r8 has already been done...
19:03:07 <madbr> yeah but they're different copies of r1, r8
19:03:28 <Fiora> so like, this is a special case in which you have a bunch of loop iterations which are completely independent?
19:03:29 <madbr> essentially the second unit is already starting the second iteration of the loop
19:03:41 <Fiora> I think OOE already does that
19:03:56 <Fiora> like, modern CPUs with big reorder buffers can be running 15 iterations of a loop at a time
19:04:06 <madbr> OOE doesn't do that, it reorders nearby instructions
19:04:15 <Fiora> um... that's... not quite what it does...
19:04:34 <Fiora> it keeps dispatching instructions until one of the queues fills, preventing it from doing so, I think
19:04:41 <madbr> like, on a p2, eventually it's running multiple iterations at the same time yes
19:04:49 <madbr> it does it locally really
19:04:51 <Fiora> so like in one bit of code I wrote I have a loop that does two loads, an average, and a store
19:05:02 <Fiora> this loop manages to fill the entire load queue, so that's where it stops issuing
19:05:04 <madbr> like, if you had a 300 instruction routine
19:05:11 <madbr> your iterations wouldn't overlap
19:05:56 <Fiora> but if it had 300 instructions it could probably do something else in those 300 instructions...
19:06:15 <Fiora> I mean, if the iterations are independent, you can just write the code a bit differently to let OOE do things better? :<
19:06:36 <madbr> unrolling isn't very beneficial in OOO
19:06:58 <Fiora> it helps if the loop is super gigantic and has a long dependency chain and the OOE buffer doesn't reach into the next iteration, I think?
19:06:59 <madbr> SIMD (SSE, MMX, NEON, Altivec) kinda helps
19:07:05 <Fiora> like your 300 example...
19:07:21 <madbr> but your routine has to be very paralellizable and most of the time the compiler can't guess I think
19:07:37 <Fiora> soooo write the simd yourself silly :P
19:07:50 <madbr> on p2 unrolling only saves like the cmp and jmp
19:08:09 <Fiora> ... but unrolling is useful if it lets the execution unit get more parallelism...
19:08:31 <madbr> but it's really only worth it if your loop is really small
19:08:37 <madbr> like 8 instructions or less
19:08:56 <Fiora> but if it's like 8 instructions or less OOE can look ahead to the next iteration just fine o_O
19:09:01 <Fiora> so you don't need to unroll...
19:09:44 <madbr> no it's like the other way around
19:09:59 <madbr> in the 300 example you're less likely to get dependency chains
19:10:30 <Fiora> but.... you just said the iterations were independent...
19:11:05 <madbr> like, you'll have result to result dependency chains within the routine but in fact both the compiler and the CPU can analyze that stuff and reorder everything if there aren't too many memory accesses
19:11:54 <madbr> in a 8 instruction loop your writes and reads will be very close together and it will probably be hard to "prove" that the reads don't depend on the writes
19:12:21 <madbr> like you didn't overlap the read and write buffer addresses on purpose
19:12:44 <Fiora> the cpu does speculative loads and stores though
19:12:51 <Fiora> and if it turns out that things did overlap it will re-issue them
19:13:13 <madbr> I'd like to see how they do that :D
19:13:21 <madbr> afaik it's freakishly complex
19:13:34 <Phantom_Hoover> half an hour of relatively on-topic discussion and nobody's mentioned turing even once
19:13:35 <Taneb> What's the best way to bring a C program to a screaming halt
19:13:49 <Bike> turn off the computer
19:13:51 <Fiora> they do it just by issuing them in advance and then going and redoing it if they were wrong XD
19:14:02 <madbr> phantom : that's what you're trying to avoid
19:14:20 <madbr> turing is when your result influence each other tightly and you can do everything
19:14:25 <Fiora> not that I know, like, hardware details but
19:14:45 <Bike> these machines are only pushdown automata, Phantom_Hoover `-`
19:14:47 <madbr> but it breaks all optimizations since you can't reorder everything and everything has effects on everything else :D
19:15:26 <madbr> jumps and memory accesses are hell
19:15:52 <Fiora> yay the benchmark worked
19:15:53 <Fiora> for( int i = 0; i < 128; i++ )
19:15:53 <Fiora> sum += i * i * i * i * i * i;
19:16:11 <Fiora> this loop takes 770 cycles on my machine, so it's doing 1 multiply every cycle
19:16:15 <madbr> they prevent the C++ optimizer from doing its job
19:16:17 <Fiora> but each iteration is completely latency bound
19:16:26 <Fiora> so it has to be looking ahead at least 6 iterations in order to get that.
19:16:38 <Fiora> erm, at least 3 iterations, since imul is latency 3
19:17:08 <madbr> doesn't have memory loads inside the loop so the compiler can deal with it :D
19:17:18 <Fiora> okay, let's test that then :3
19:17:40 <madbr> you can save a mul actually
19:17:59 <madbr> for(int i=0; i<128; i++)
19:18:27 <Fiora> Yeah, I wasn't trying to optimize it XD
19:18:43 <madbr> I think gcc can do that optim but not other compilers actually
19:18:49 <Bike> madbr: er don't you need to multiply by temp
19:19:05 <madbr> bike: errr, yeah, sorry there
19:19:08 <Fiora> my gcc does a chain of 5 imuls
19:19:08 <Bike> oh, duh yeah, power decomposition
19:19:13 <Fiora> though it's an older one
19:19:27 <Bike> my personal favorite possibly-NP-complete-but-who-knows problem
19:19:28 <Fiora> okies! so, second test:
19:19:30 <Fiora> for( int i = 0; i < 128; i++ )
19:19:30 <Fiora> int in = input[i];
19:19:30 <Fiora> output[i] = in*in*in*in*in*in;
19:19:37 <Fiora> this takes 754 cycles.
19:19:54 <Fiora> if it was totally latency bound it'd take at least 5*3*128 cycles.
19:20:16 <madbr> { int i2 = i*i; int i4 = i2*i2; sum += i4*i2; }
19:20:21 <Fiora> hmm. lemme try it with random memory accesses >:3
19:21:02 <Bike> madbr: quick what's the optimal chain for i^15
19:21:02 <madbr> fiora : I think it's because the compiler can guess that input and output don't alias yes
19:21:42 <Fiora> the compiler doesn't unroll the loop though...
19:21:57 <madbr> fiora: might not be worth it
19:21:59 <Fiora> I mean, the compiler can't like, tell the cpu that they don't alias
19:22:03 <Fiora> the cpu has to figure it out on its own...
19:22:22 <madbr> ah, so speculative loads actually work? :D
19:22:42 <madbr> afaik ARMs don't have speculative loads
19:23:00 <Fiora> okay, even better....
19:23:05 <Fiora> for( int i = 0; i < 128; i++ )
19:23:05 <Fiora> int off1 = randnums[i];
19:23:05 <Fiora> int off2 = randnums[i+128];
19:23:05 <Fiora> int in = input[off1];
19:23:07 <Fiora> input[off2] = in*in*in*in*in*in;
19:23:12 <Fiora> this takes 782 cycles XD
19:23:21 <Fiora> (randnums[] is an array of random numbers between 0 and 128)
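[Aside: a compilable version of the loop pasted above, for anyone who wants to reproduce it; the wrapper, array definitions, and the __rdtsc() timing (GCC/Clang, <x86intrin.h>) are additions, and randnums[] is assumed to be filled with values in [0,128) beforehand.]

    #include <stdint.h>
    #include <x86intrin.h>

    int input[128];
    int randnums[256];   /* fill elsewhere with values in [0,128) */

    uint64_t bench(void) {
        uint64_t t0 = __rdtsc();
        for (int i = 0; i < 128; i++) {
            int off1 = randnums[i];
            int off2 = randnums[i + 128];
            int in = input[off1];
            input[off2] = in * in * in * in * in * in;
        }
        return __rdtsc() - t0;   /* ~780 cycles above when loads and stores rarely alias */
    }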
19:23:55 <kmc> they really missed an opportunity to name the 64-bit ARM architecture "LEG"
19:24:11 <kmc> a la Thumb
19:24:14 <pikhq> Best way to bring a C program to a screaming halt? volatile int x = 0; for(volatile unsigned int i = 0; i < UINT_MAX; i++) { x = time(0) + rand(); }
19:24:23 <Fiora> ... oh my gosh I I only just got that joke
19:24:36 <kmc> see also: ELF and DWARF
19:24:39 <pikhq> Remember, volatile is your friend.
19:24:54 <Fiora> wow . I didn't get that one either
19:25:10 <Fiora> acronym naming people are horrible/wonderful
19:25:25 <kmc> Microprocessor without Interlocking Pipeline Stages
19:25:29 <kmc> you can't elide a "without"!!
19:25:53 <Fiora> I love that one, it's like, almost immediately after the original they put the interlocks back in
19:25:56 -!- Taneb has quit (Quit: Leaving).
19:26:32 <Fiora> madbr: oh, so like, I made it so all the randnums were zero (so it always aliased)
19:26:35 <Fiora> now it takes 1981 cycles
19:26:51 <Fiora> if all the randnums are [0,2), it takes 1649
19:26:54 <kmc> Fiora: oh, you mean to get rid of the load/store delay slots?
19:27:33 <Fiora> so basically the cpu is speculatively executing many iterations ahead, and it has to stop and wait if it turns out the loads and stores collided
19:27:34 <madbr> but yeah that's impressive
19:27:34 <kmc> are interlocks the same thing as bypasses
19:27:57 <Fiora> ummmm I was just reading about that the other day but didn't quite get it, let me see if I can find the wikipedia explanation
19:28:08 <Fiora> http://en.wikipedia.org/wiki/Classic_RISC_pipeline#Solution_B._Pipeline_Interlock
19:28:20 <madbr> they are the same thing in... reverse............ right?
19:28:38 <Fiora> interlocks delay the pipeline when data isn't ready for an instruction, I think
19:28:50 <kmc> i guess not all hazards can be bypassed
19:29:53 <madbr> generally if you have an ADD then a LOAD
19:30:00 <madbr> that takes 2 cycles no matter what
19:30:16 <madbr> ex : address calculation
19:30:22 -!- carado has joined.
19:30:31 <Fiora> I think like, the idea is to bypass everything you can, use delay slots where you can't (maybe), and then interlock things where you can't really or don't want to use delay slots
19:30:35 <Fiora> like a cache miss might be an interlock?
19:30:38 <madbr> something like add rx, [ra + rd] will have 3 cycles latency on the result
19:33:12 -!- carado has quit (Client Quit).
19:35:02 -!- carado has joined.
19:35:35 <Fiora> I think intel actually has some performance counters for cases when speculative loads need to be cancelled because an earlier store aliased it
19:36:09 <Fiora> there's also a thing where if a store's address is a multiple of 4K away from a load's address, they falsely alias
19:36:43 <madbr> looks cache line based :D
19:36:48 <Bike> because that's how the pseudohash for the cache table works, probably
19:39:35 <kmc> that sucks
19:39:41 <kmc> won't that happen a lot
19:39:49 <kmc> if you are copying between aligned arrays
19:40:59 <Bike> obviously you need to make sure that the arrays are aligned "but not too much"
19:41:04 <Fiora> I think it's generally good to avoid having things offset by *exactly* that multiple
19:41:17 <Fiora> like. don't make the stride of your array 2048 or 4096 XD
19:41:28 <madbr> it will tend to produce cacheline colocations yes
19:41:37 <Fiora> a lot of chips really dislike that
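[Aside: the "don't make the stride 2048 or 4096" advice as a tiny sketch; the 16-float pad is illustrative. Padding each row keeps loads from one row and stores to the next from landing an exact 4 KiB multiple apart and falsely aliasing.]

    #define ROWS 1024
    #define COLS 1024
    /* 1024 floats = 4096 bytes per row would trip the 4K false-aliasing case;
     * the extra 16 floats break the power-of-two stride. */
    static float img[ROWS][COLS + 16];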
19:41:48 <Fiora> there was actually an issue in the linux kernel with bulldozer, lemme see if I can find that one
19:42:07 <Fiora> http://www.phoronix.com/scan.php?page=article&item=amd_bulldozer_aliasing&num=1
19:42:59 <Fiora> if I remember right, it has a 2-way instruction cache that's shared between two cores in a module
19:43:15 <Fiora> and if the kernel allocates instruction pages wrong, it can cause aliasing issues that bounce cachelines back and forth between the cores
19:43:26 <Fiora> since it's only 2-way
19:47:52 <Bike> Maybe if the cache lines were aligned to some weird number like seven words this wouldn't happen!
19:47:58 <kmc> there's something unsatisfying about using cache to provide the abstraction of fast large memory, and then having to think past that abstraction and understand the cache behavior in detail to get good performance
19:48:06 <kmc> seems self-defeating
19:48:10 <kmc> i mean it doesn't matter for most programs
19:49:21 <kmc> but maybe for high performance stuff, we should just admit that our computers are distributed systems composed of lots of processors with small memories, talking to each other and to a big memory
19:49:53 <Fiora> I think that's sorta true of everything in computing? hardware tries to provide a system that's fast in general, but the more your code is aware of the hardware, the faster it can be?
19:50:48 <kmc> but sometimes it seems that you have to be not just aware of the hardware, but actively subverting the clever things it tries to do, when they are not so clever for your use case
19:51:24 <kmc> maybe if i'm writing some numerical inner loop, rather than thinking hard about automatic cache behavior, i should just tell the CPU what to cache and when
19:51:32 <kmc> people already do this with prefetch instructions
19:51:44 <Bike> guess that's what Checkout is for :p
19:51:52 <Fiora> hardware prefetching is kinda iffy though, I think a lot of guides say not to use it except in rare cases
19:51:59 <Fiora> er, software prefetching
19:52:04 <Fiora> (because the hardware prefetching is really good)
19:52:07 <Bike> well you're talking about rare cases here, aren't you
19:52:11 <kmc> gcc emits it for fairly simple loops
19:52:13 <Fiora> I've found it really hard to make software prefetching useful
19:52:33 <kmc> depending on the -march setting yeah
19:53:00 <Phantom_Hoover> so is making checkout work on a modern cpu actually possible
19:53:39 <Bike> @google esolang checkout
19:54:33 <kmc> trying to remember the case i had
19:56:17 -!- oerjan has joined.
19:56:47 <Bike> "make it fast and good generally but possible to use for specific applications" is a pretty general problem for design, i guess
19:59:16 <Bike> yeah i don't even know how the hell it works
20:00:02 <Phantom_Hoover> there are all these weird behaviours that come out of nowhere
20:00:05 <Bike> should have an example bf interpreter
20:00:41 <Bike> "For each argument, create a new level 5 subunit, and execute the given code. How this command works is system-dependent." beautiful
20:01:53 -!- Taneb has joined.
20:05:47 -!- DHeadshot has quit (Ping timeout: 255 seconds).
20:05:58 -!- AnotherTest has joined.
20:06:30 -!- Nisstyre-laptop has joined.
20:12:17 <lambdabot> shachaf asked 7h 43m 2s ago: what is sprø som selleri
20:12:17 <lambdabot> shachaf asked 7h 42m 56s ago: (THX)
20:13:02 <oerjan> that's the literal meaning.
20:13:42 <kmc> i wanted to test my Python Mersenne Twister against the reference C code
20:13:47 <kmc> and ctypes made this very easy
20:14:37 <kmc> it lets you import any .so as a Python module basically
20:18:19 <Bike> just dumps the symbol table as the list of module properties?
20:19:04 <kmc> or does dlsym() dynamically on access
20:19:05 <kmc> don't know which
20:20:15 <oerjan> <shachaf> Bike: How do you talk about unions categorically?
20:20:56 <oerjan> @tell shachaf crunchy, like celery.
20:21:07 <oerjan> _disjoint_ union is coproduct, afair.
20:21:24 <Bike> yeah but he meant not-necessarily-disjoint union.
20:21:26 <oerjan> ordinary union might be a pullback or pushout?
20:21:41 <Bike> yeah he went through all that too
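[One standard way to make oerjan's pushout guess precise, at least for two subsets A, B of a fixed set X: the ordinary union is the pushout of the inclusions out of the intersection.]

    A \cup B \;\cong\; A \sqcup_{A \cap B} B,
    \qquad \text{the pushout of } A \hookleftarrow A \cap B \hookrightarrow B \text{ in } \mathbf{Set}.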
20:21:44 <kmc> you have to specify argument / return types yourself, naturally
20:21:56 <kmc> but for small things that's no great hardship
20:22:10 <kmc> and the alternative (parsing C header files?) would be cumbersome for small things
20:22:12 <Bike> and it coerces tuples to arrays or whatever?
20:22:23 <kmc> something like
20:22:32 <kmc> it gives you helper functions to construct mutable arrays and such
20:22:38 <kmc> http://docs.python.org/2/library/ctypes.html
20:22:46 <kmc> it's definitely easy to screw up using it because... it's C
20:23:09 <kmc> but that seems more or less unavoidable
20:24:09 <kmc> like it's probably not the right answer for binding to a huge library with hundreds of functions and structs and whatever
20:24:32 <Bike> «Sometimes, dlls export functions with names which aren’t valid Python identifiers, like "??2@YAPAXI@Z".» dig at c++? dig at c++
20:24:49 <kmc> yeah i don't even know
20:29:20 <Bike> "ctypes tries to protect you from calling functions with the wrong number of arguments or the wrong calling convention. Unfortunately this only works on Windows. It does this by examining the stack after the function returns, so although an error is raised the function has been called" uh
20:29:38 <Bike> "There are, however, enough ways to crash Python with ctypes, so you should be careful anyway."
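[A minimal ctypes sketch along the lines kmc describes; the library here is Linux's libm rather than his Mersenne Twister .so, purely for illustration.]

    import ctypes

    libm = ctypes.CDLL("libm.so.6")          # import any shared library by name
    libm.cos.argtypes = [ctypes.c_double]    # you declare the signature by hand...
    libm.cos.restype = ctypes.c_double       # ...ctypes won't parse C headers for you

    print(libm.cos(0.0))                     # 1.0

    buf = ctypes.create_string_buffer(16)    # helper for a mutable C char buffer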
20:42:00 -!- hagb4rd|lounge has joined.
20:42:16 -!- hagb4rd|lounge has changed nick to hagb4rd.
20:45:53 -!- ogrom has quit (Quit: Left).
20:48:26 <lambdabot> oerjan said 27m 31s ago: crunchy, like celery.
20:48:31 <shachaf> oerjan: I understood that much.
20:53:12 <oerjan> 06:53:14 <Phantom_Hoover> something that irks me: i'm not sure all those impossible geometries you can build with portals are actually non-euclidean like everyone says
20:53:15 <oerjan> 06:55:06 <Phantom_Hoover> since their curvature is still everywhere zero
20:53:28 <oerjan> i do not think "euclidean geometry" applies to everything with zero curvature.
20:54:12 <oerjan> euclidean geometry basically means your space is a vector space
20:57:08 <oerjan> euclidean geometry also has the property that (1) it looks the same everywhere (2) has trivial fundamental group.
20:58:07 <madbr> I think the difference is that in a portal-based one the non-euclideanness is concentrated in the portal junctions, whereas in other ones it's spread out evenly (giving curvature)
20:58:43 <madbr> so your crazy portal space isn't non-euclidean locally, but globally it is
20:58:46 <madbr> or something like that
20:58:52 <oerjan> without (2) you can get things like befunge-like wrapping. without (1) you can probably get even more complicated things.
20:59:06 <oerjan> madbr: the befunge-like wrapping has no junctions.
21:00:15 <oerjan> hm it might be that (2) implies (1).
21:00:53 <oerjan> or put differently, any space with zero curvature has euclidean space as its universal covering space.
21:01:13 -!- mekeor has quit (Read error: Connection reset by peer).
21:01:17 -!- AnotherTest has quit (Quit: Leaving.).
21:01:24 <oerjan> _maybe_. i don't actually know this.
21:01:38 <madbr> tbh I'm totally out of my depth here
21:02:47 <madbr> wonder if hyperbolic befunge is possible
21:04:57 <oerjan> you'd probably base it on one of these http://en.wikipedia.org/wiki/Uniform_tilings_in_hyperbolic_plane
21:06:19 <Bike> isn't there already a space-ish language that works on R^2 that would probably be pretty easy to generalize to arbitrary manifolds
21:09:31 <oerjan> although it's not clear to me from those kinds of pictures whether the graph of connections for these tilings is really different from what you can get on a euclidean plane, without which there wouldn't be a substantial difference
21:10:50 <oerjan> heh, hyperbolic geometry allows infinite-sided regular polygons :P
21:11:11 <madbr> hyperbolic mapping would look something like
21:11:25 <Bike> but is it turing complete
21:11:26 <oerjan> that would probably just give an infinite binary tree as the graph, though.
21:12:52 <madbr> would look something like
21:12:52 <madbr> |||||||||||||||||||||||||||
21:12:52 <madbr> | || || || || || || || || |
21:12:52 <madbr> ||| |||||| |||||| |||
21:12:52 <madbr> | | | || | | || | | |
21:12:52 <madbr> ||||||||| |||||||||
21:12:53 <madbr> | || || | | || || |
21:13:06 <madbr> where | = somewhere where it's valid to put an opcode
21:13:36 <madbr> using a tiling of squares where vertexes are surrounded by 5 squares instead of 4
21:13:37 <oerjan> why am i logged out of wikipedia again, maybe i forgot to tick that box yesterday.
21:14:26 <oerjan> madbr: i'd assume you'd put opcodes on the vertices, since the tilings are transitive in those
21:14:51 <oerjan> although the regular ones are also transitive on faces and edges
21:15:44 <Bike> "Befunge/index.php" oh i forgot about that
21:17:08 <Bike> ugh why isn't there a Category:Shit That Works On Weird Spaces
21:17:48 <Bike> also Category:Articles With No Mention Of Brainfuck
21:18:15 <Taneb> Wouldn't having that category mention brainfuck?
21:18:59 <Bike> Obviously the category wouldn't include itself.
21:19:14 <Bike> (it's a small category after all)
21:19:48 <oerjan> well it says "With No Mention Of Brainfuck", not "With No Mention Of Themselves"
21:20:26 <oerjan> is Bike groaning so hard he cries?
21:20:48 <madbr> 4:6 would be easier to map to ascii
21:20:52 <Bike> actually my saline tubes are just leaky
21:21:02 <madbr> map everything to [3,2] ascii groups
21:21:35 <oerjan> madbr: what's that notation from
21:21:51 <Bike> obviously the language should be composed of analytic functions. any closed path halts because the result is zero!!
21:21:54 <madbr> just making it up on the fly
21:21:57 <oerjan> alternatively, what the heck does that mean
21:22:13 <madbr> ok normal space is 4:4
21:22:41 <madbr> 4 sided, 4 squares around each corner
21:22:48 <madbr> that's a square grid
21:22:50 -!- monqy has joined.
21:23:23 <oerjan> Bike: well the {4,4} tiling on the page i linked is euclidean, maybe that's what he means.
21:23:31 <madbr> 4:5 is a hyperbolic surface
21:24:00 <madbr> oh yeah let's use {} instead
21:24:09 <oerjan> madbr: if you call those {4,3} and {4,5} you will be using the same notation as the article... right
21:24:14 <madbr> {4,4} is a square grid
21:24:31 <madbr> {4,5} is a hyperbolic space corresponding to the pattern I pasted above
21:24:46 <madbr> where you have groups of 5 squares around a corner
21:24:57 <madbr> which you could notate as
21:24:58 <oerjan> those are all in the row number 4 (but actually second) of the regular example pictures
21:24:59 <Bike> I'm kind of lost as to how cubes aren't {4,8}.
21:25:18 <madbr> cubes have square faces
21:25:21 <Bike> also they have six sides?
21:25:29 <oerjan> Bike: because each vertex has 3 edges connected
21:25:33 <madbr> yeah but they have 8 corners
21:25:43 <madbr> each corner connects 3 edges
21:25:44 <Bike> but a square has two edges on each vertex.
21:26:01 <madbr> each corner is surrounded by 3 squares
21:26:20 <madbr> like, there are 3 lines that join at each corner
21:26:41 <oerjan> equivalently, each vertex neighbors 3 faces
21:26:56 <madbr> each corner is connected to 3 different other corners
21:27:30 <oerjan> Bike: each corner on the _cube_ has 3 neighboring edges and 3 neighboring faces
21:27:54 <Bike> So how are squares {4,4}.
21:27:58 <madbr> if you change that 3 for 4 (giving you {4,4}), then your cube turns into a flat grid
21:28:13 <madbr> it's not just squares, it's a grid of squares
21:28:47 <Bike> But if you have a grid of cubes shouldn't that be {4,6} since each vertex is at six edges or six faces.
21:29:10 <madbr> no not a grid of cubes
21:29:56 <madbr> like, we're looking at surfaces
21:30:04 <madbr> the surface of a cube is 2d
21:30:12 <madbr> the surface of a square grid is 2d
21:30:26 <madbr> a grid of cubes is not a surface, it's a volume and it's 3d
21:32:47 <oerjan> madbr: hm those {4,n} n>=5 tilings _should_ be good for a hyperbolic befunge
21:33:21 <madbr> {4,6} is easier on ascii actually
21:33:29 <oerjan> and being quadrilateral, not too much change in how things work locally
21:33:35 <madbr> {4,5} you represent a group of 5 tiles as
21:33:54 <madbr> and if you apply that recursively then you fill your text file
21:34:55 <oerjan> there would still be 4 directions to go from each face. although those directions would no longer be globally consistent.
21:35:18 <madbr> yeah I think if you just keep going in the same direction you spin
21:35:27 <madbr> {4,6} would be easier
21:35:36 <madbr> you'd represent a group of 6 squares as
21:35:41 <oerjan> so you would probably need to take care how things are placed in cells, you want to distinguish rotated characters
21:35:46 <madbr> so your tiling fills the text file
21:35:52 <oerjan> and p and g need some careful consideration :P
21:36:54 -!- epicmonkey has quit (Ping timeout: 272 seconds).
21:39:36 <oerjan> <madbr> yeah I think if you just keep going in the same direction you spin <-- i don't think so, if you go between faces and always leave by the edge opposite the one you entered, it seems to me like all the {4,n} tilings give an infinite path
21:40:08 <madbr> there's no opposing edge in {4,5} I think
21:40:19 <madbr> tho there's one in {4,6} yes
21:40:21 <oerjan> otoh if you rotate right after each move, you will spin in _n_ steps, rather than 4 for the euclidean case
21:40:39 <oerjan> madbr: sure there is, they're still quadrilaterals
21:40:53 <oerjan> i am assuming cells are faces, not vertices
21:40:59 <madbr> damn this is hard :3
21:41:49 <oerjan> the contents of a cell need to be an opcode and a direction that opcode is facing
21:41:57 <madbr> I can't figure out an ascii mapping for surfaces that aren't {4,n}
21:42:39 <oerjan> well surfaces that aren't {4,n} would mean each cell has != 4 neighbors, which would make it locally much more different from ordinary befunge, i think
21:42:41 <Bike> map to an ascii svg description of the surface
21:43:10 <madbr> bike: how the hell do you edit that!
21:43:22 <Bike> in your text editor obviously
21:43:24 <Bike> xml is the future!
21:43:32 <oerjan> madbr: well the euclidean {6,3} is not too hard so maybe other {6,n} can be done too?
21:43:36 <Bike> THE FUTURE MADBR
21:44:01 <Bike> the future is going to leave you behind, at 2 radians from zero
21:44:03 <madbr> oerjan: ok how do you do {6,4}
21:44:20 <oerjan> ...i didn't say _i_ could do it :(
21:44:36 <Bike> totally unrelatedly "Adaptive Control of Ill-Defined Systems" and "Arrows, Structures, and Functors: The Categorical Imperative" were written by the same guy, cool
21:44:42 <zzo38> Use whatever data format is good for your use, whether it is JSON, XML, SQL, MIDI, etc
21:45:02 <oerjan> madbr: ...um you cheat and use the {4,6} one with dual markings, or something :P
21:45:30 <oerjan> (i assume they must be dual vertex-face-wise)
21:46:01 <Bike> i feel like i should be angry about appropriating kant for a dumb pun though
21:47:16 <oerjan> (x,y) coordinates won't work with this i think...
21:47:38 <madbr> ok how would this work recursively
21:48:11 <madbr> you'd start with a group of 4 hexes
21:49:33 <oerjan> what you'd want to know i think, is the translation/rotation group of this, so you can calculate when you are returning to the same spot
21:50:19 <oerjan> as i noted, (rotate-right move-forward)^n = identity
21:50:40 <oerjan> while move-forward itself seems to be infinite order
21:51:20 <oerjan> ...you'd want the group elements to be your coordinate system, i guess
21:52:05 <oerjan> this being one of those torsors we have previously discussed here not too long ago
21:52:22 <oerjan> (or well, zzo38 asked about something for which they are the answer)
21:55:13 <oerjan> or hm you could use ^><v as your generators, except they would depend on what way you were oriented when entering a cell
21:56:39 <oerjan> hm could be that {4,n}, n odd allows you to do arbitrary rotation by moving, while n even only allows you to do 180 degrees
21:57:46 <oerjan> in {4,5}, >v<^> gets you back but rotated 90 degrees left
21:58:59 <oerjan> in {4,6}, >v<^>v gets you back upside down
21:59:33 <oerjan> these are commands interpreted according to your current facing direction.
21:59:48 <oerjan> ...which is _different_ from your befunge direction of moving :P
22:06:23 <oerjan> {4,5} and {4,6} have their own wp articles, which link on to _this_ http://en.wikipedia.org/wiki/Coxeter-Dynkin_diagram#Hyperbolic_groups_in_H2 ARGH
22:08:12 <Bike> `quote water memory
22:08:14 <HackEgo> 276) <zzo38> elliott: I doubt water memory can last for even one second in a gravitational field (or even outside of a gravitational field), but other people think they can make water memory with telephones.
22:08:15 <oerjan> http://en.wikipedia.org/wiki/Order-6_square_tiling has an escher picture :)
22:11:55 <oerjan> madbr: oh hm this should mean {4,8} will _not_ rotate your direction if you return, meaning it has globally consistent directions, i think
22:14:56 <oerjan> hm you could think of this moving around as an infinite ternary tree
22:18:17 -!- agony has joined.
22:18:28 -!- agony has changed nick to AgonyLang.
22:20:21 <AgonyLang> Hi all, I've just published my first esoteric language on esolangs: http://esolangs.org/wiki/Agony
22:21:06 <AgonyLang> It is another (there are so many) brainfuck-related language, in this case actually mostly backwards compatible
22:21:23 <Bike> yes, yes there are so many
22:21:58 <AgonyLang> This version supports self-modifying code, increasing the agony of making something
22:23:39 <Taneb> AgonyLang, Phantom_Hoover won't be happy. He's got a Tumblr which is pretty much him hating brainfuck derivatives
22:25:02 <oerjan> and the symmetry group would be about how to identify parts of that
22:25:19 <AgonyLang> People can hate all they want, their problem, not mine
22:26:19 <Taneb> `msg HackEgo `? Phantom_Hoover
22:26:21 <HackEgo> /home/hackbot/hackbot.hg/multibot_cmds/lib/limits: line 5: exec: msg: not found
22:26:55 <Bike> `? phantom_hoover
22:26:57 <HackEgo> Phantom Michael Hoover is a true Scotsman and hatheist.
22:27:23 <zzo38> AgonyLang: Well, you are correct, but still I also think there are already too many, but don't let that stop you
22:29:26 <zzo38> I may not agree with what you have to say, but will defend your right to say it.
22:29:48 <Bike> voltaire had brainfuck derivatives in mind when he said that.
22:29:52 <AgonyLang> You have to start somewhere; I wanted to make something with a very limited instruction set, able to run in a core so I can battle like Core War (BF Joust is very limited), and self-modifying
22:30:02 <zzo38> Voltaire didn't say that, it was his friend.
22:30:13 <zzo38> Also, he didn't have brainfuck derivatives in mind.
22:30:39 <Taneb> AgonyLang, check out http://esoteric.voxelperfect.net/files/fyb/doc/fyb_spec.txt
22:30:50 -!- augur has quit (Remote host closed the connection).
22:31:15 <Taneb> This new language reminds me of a song..
22:31:20 -!- augur has joined.
22:31:44 <Phantom_Hoover> AgonyLang, also, am i correct in assuming that there are only 16 addressable memory cells in this thing
22:31:52 <Taneb> http://www.youtube.com/watch?v=UAPJTik5mSo
22:33:30 <AgonyLang> Phantom_Hoover: The core can be much larger, but cells just have 4 bits -> 16 possible states
22:34:45 <Taneb> Phantom_Hoover, while that statement has been repeatedly established, what in particular did you have in mind?
22:35:01 <AgonyLang> Taneb: FukYorBran is closer to what I had in mind indeed, didn't see it on esolangs
22:35:24 <Taneb> AgonyLang, it's not as popular as BF Joust, because (perhaps) of its complexity
22:36:10 -!- augur has quit (Ping timeout: 256 seconds).
22:36:45 <Phantom_Hoover> AgonyLang, but... there's all this stuff about addressing.
22:37:09 <AgonyLang> Taneb: Sure, especially with multiple processes. Agony was implemented in two hours max, the rules are a bit simpler I think, and leverages the self-modifying aspect a bit more. There is also Self-modifying brainfuck, but that is based on characters only
22:37:51 <AgonyLang> Phantom_Hoover, that is machine code mapping, the instructions have a binary counterpart
22:39:10 <AgonyLang> Phantom_Hoover, for example "<-" (two instructions) is 0100 1000 in binary, so if the memory pointer points to the "-" instruction and you call "." (character print) it prints "H" (01001000)
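[A quick check of the arithmetic in that example, using only the two nibble values given above; the rest of Agony's opcode-to-nibble table isn't reproduced here.]

    byte = (0b0100 << 4) | 0b1000   # "<" then "-", packed as described
    print(hex(byte), chr(byte))     # 0x48 H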
22:40:01 <AgonyLang> The description can be improved I see (assumptions all over the place)
22:40:32 <AgonyLang> So the Hello World! looks like this: <[.<]$$$,{}~<~)+{~*@+{$~*~)~)~@<-
22:41:06 <Phantom_Hoover> As Brainfuck derivatives go, it's not all that bad, really.
22:42:02 <AgonyLang> The funny thing is it runs quite a lot of Brainfuck programs without problems; this is just a deeper level that adds the machine code memory mapping
22:42:29 <Taneb> You'd need to filter out the comments
22:42:47 <AgonyLang> Haven't implemented (nor specified) comments indeed
22:43:07 <Taneb> And most brainfuck programs assume right-infinite tape
22:43:20 <Taneb> Because some implementations don't have left-infinite tape
22:43:54 <Taneb> Interestingly, P'' (an early computational model that was virtually the same as brainfuck) has left-infinite tape
22:43:56 -!- aloril has quit (Ping timeout: 276 seconds).
22:44:06 <AgonyLang> Yeah, that is why I've specified putting the pointer to the right, after that having a big circular core helps
22:44:45 <Taneb> Anyway, what brought you to esolanging?
22:45:17 <AgonyLang> Pfft, I've been doing corewars for a long time, and Redcode is basically an esolang
22:45:39 <AgonyLang> And I've played with whitespace/bf for a while some time ago, and now I just had spare time
22:46:18 <Taneb> Check out Piet! http://www.dangermouse.net/esoteric/piet.html
22:46:20 <AgonyLang> Reading an article about somebody using genetic algorithms in BF to create Hello World! re-sparked my interest
22:47:16 <Taneb> http://www.dangermouse.net/esoteric/piet/helloworld-mondrian-big.png <-- hello world in Piet
22:49:26 <AgonyLang> Also, a friend is esoteric language fan as well, he implemented various esolangs in Redcode (for example Underload: http://corewar.co.uk/assembly/underload.htm)
22:56:26 -!- aloril has joined.
22:58:09 -!- augur has joined.
23:03:12 <oerjan> `addquote <Phantom_Hoover> As Brainfuck derivatives go, it's not all that bad, really.
23:03:17 <HackEgo> 977) <Phantom_Hoover> As Brainfuck derivatives go, it's not all that bad, really.
23:05:34 -!- augur has quit (Remote host closed the connection).
23:09:45 <AgonyLang> Going to meet someone in 15 minutes who has two deaf parents, so I'm now learning a bit of sign language as a surprise, pretty esoteric as well ;)
23:10:15 <Bike> imo learn nicaraguan sign language
23:10:47 <Taneb> Make sure you learn the right damn sign language
23:10:54 <Taneb> ASL and BSL are very very different
23:11:00 <Taneb> And NSL is even more different
23:11:14 <Taneb> Other than ASL, I don't know if those acronyms are ever used
23:11:23 <Bike> what's bsl, british?
23:11:52 <impomatic> I've picked up a bit of sign language from watching kid's TV! :-(
23:13:19 <zzo38> If they are shows about sign language, then I suppose it can help a bit.
23:15:00 <hagb4rd> watching news supported by sign language... have you ever noticed how accurate the gestures are? it's like a perfect reduction revealing the essence of the issue
23:15:20 <oerjan> esolang wiki, now with a genuine van rijn piece
23:15:51 <hagb4rd> busting all those rhetorical figures and euphemisms
23:21:36 <oerjan> kmc: is it boiled with charcoal?
23:21:41 <kmc> probably not
23:27:23 <Taneb> Could a Piet program be represented as a map from some key to an 8-tuple of Maybe PietEffect?
23:27:34 <Taneb> Maybe (PietEffect, Key)
23:31:16 <Taneb> So... StateT Key Maybe PietEffect
23:36:24 <Taneb> Key -> (Maybe (PietEffect, Key)) * 8
23:36:50 <lambdabot> <hint>:1:6: parse error (possibly incorrect indentation)
23:37:06 <Taneb> > ((18 * k) + 1) * 8
23:37:14 <Taneb> > (((18 * k) + 1) * 8) ^ k
23:38:47 <HackEgo> olist: shachaf oerjan Sgeo
23:39:03 <oerjan> has he reached 9 in a row yet?
23:39:55 <madbr> {4,6} is so hard to turn into a uniform grid :O
23:39:57 <Sgeo> This is the 6th
23:40:08 <Sgeo> 3 more before it's the 9-in-a-row
23:42:40 <Taneb> oerjan, key to a colour block
23:48:41 <Sgeo> I think syntax sugar helps me learn things :/
23:48:50 <Sgeo> do notation gives me an intuition about monads
23:49:24 <oerjan> leibniz notation gives people an intuition about calculus.
23:51:47 -!- cantcode has joined.
23:55:54 <HackEgo> CaNtCoDe: WeLcOmE To tHe iNtErNaTiOnAl hUb fOr eSoTeRiC PrOgRaMmInG LaNgUaGe dEsIgN AnD DePlOyMeNt! FoR MoRe iNfOrMaTiOn, ChEcK OuT OuR WiKi: HtTp://eSoLaNgS.OrG/WiKi/mAiN_PaGe. (fOr tHe oThEr kInD Of eSoTeRiCa, TrY #eSoTeRiC On iRc.dAl.nEt.)
23:58:32 <oerjan> hm sadly the WeLcOmE algorithm hasn't been properly refactored
23:59:18 <HackEgo> #!/bin/sh \ welcome $@ | python -c "print (lambda s: ''.join([ (s[i].upper() if i%2==0 else s[i].lower()) for i in range(len(s)) ]))(raw_input())"
23:59:54 <Jafet> @type zipWith id (cycle [toUpper, id])
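[A sketch of what a tidier version of that alternating-case transform might look like, in the spirit of Jafet's zipWith suggestion; Python 3 here, whereas the HackEgo one-liner above is Python 2, and the function name is mine.]

    from itertools import cycle

    def alternating_case(s):
        # upper-case the characters at even positions, lower-case the odd ones
        return ''.join(f(c) for f, c in zip(cycle([str.upper, str.lower]), s))

    print(alternating_case("hello, esolangs"))   # -> "HeLlO, eSoLaNgS"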