00:00:00 North America, yes :-) 00:00:04 Guessing the most peopled state has more than that probably. 00:00:26 I think CA is the most densely populated, but AK is bigger geographically 00:00:32 Rugxulo: are you assuming that I'm going to come and rape you? 00:00:34 Does busybox's dd print out statistics on SIGUSR1 (like the usual one does), or does it just die? 00:00:35 or maybe TX is most populous with CA next, can't remember 00:00:37 that would be rather unlikely 00:00:47 alise, depends on where *you* live ;-) 00:00:54 Never mind, it just finished. 00:01:05 -!- Oranjer has left (?). 00:01:35 Rugxulo: England. 00:02:28 oh noes, it's Crystal "Windows 7 was my idea" !!!! RUN! 00:02:43 Wow, you truly are infuriating to talk to. 00:02:58 alise, sorry ... you just never said why you wanted to know 00:03:15 so I can track you down knowing only your state and stalk you forevermore 00:03:45 The only flaw in my plot was that I hadn't counted on you not telling me! 00:03:50 -!- adam_d has quit (Ping timeout: 245 seconds). 00:03:51 Foiled again! 00:04:25 all I know is "you have a theory", which I blindly guess had something to do with my offhand comment about "the universe is expanding (but how do they know that?)" 00:04:31 paranoid people are always foiled 00:04:56 use physics, it should tell you everything (even where I live), right? :-P 00:04:57 I think I've made myself too stalkable 00:05:11 Sgeo_: indeed, Seth 00:05:27 Rugxulo, um, I guess if someone had perfect knowledge of the state of the universe.. And even then, it would stop being helpful eventually 00:06:14 Well, probably just need the Earth, or even just the US for a one-time snapshot 00:06:15 BTW, befi seems much faster on mandel than others (even ccbi) 00:06:27 Erm, US and whichever Freenode server you're connected to 00:07:31 -!- alise__ has joined. 00:08:41 I hate this connection. 00:09:35 alise__, EDGE? =P 00:10:37 -!- alise_ has quit (Ping timeout: 258 seconds). 
00:13:07 -!- alise__ has quit (Ping timeout: 240 seconds). 00:15:06 poor alise, poor nosy nosy alise ... 00:25:48 -!- alise__ has joined. 00:28:46 wb 00:29:28 http://www.pastebin.org/155564 00:29:36 ccbi speeds up, but befi slows down :-/ 00:29:50 it's a cpu-specific problem, though 00:30:41 admittedly, I was targeting size over speed, so I don't majorly care, just curious ... 00:31:08 http://board.flatassembler.net/topic.php?t=10810 00:31:41 Rugxulo: Well, I got something booted, though now the rotated display works hellishly slowly. Anyway, the benchmark; is it just a loop instead of wrapping? 00:31:52 Rugxulo: Did you notice my boundary-tracking results re the earlier one? 00:32:06 results? no, don't recall that 00:32:17 See http://pastebin.com/xQFwEaUu 00:32:38 yes, it's just a manual loop instead of wrapping, which apparently causes something to bork on P4 and AMD64 (self-modifying code, perhaps? or maybe cache issues?) 00:33:05 and yet my P166 is like 30% faster, go figure 00:33:48 okay, so what exactly does -DBOUNDARY_TRACKING do?? 00:35:33 The curious bit was non-LLVM gcc's slowness on -DPF_X=15 static playfield-size diminishment vs. -DBOUNDARY_TRACKING, where the only difference in the actual time-wasting part of the code is that in the "wrap" command at the edge, the subtract-from-IP value is a compile-time constant for -DPF_X=1, but a file-scope "static int pf_box_w" for -DBOUNDARY_TRACKING. Well, *32 or so that, actually, if we're speaking bytes. 00:36:32 what GCC? 00:36:48 4.4.2 -march=pentium was loads slower (on a Pentium!) than -mtune=pentium4 00:36:55 (for original bef.c) 00:37:10 I'm talking like ten minutes slower (32 vs. 22, or something like that) 00:37:24 gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3, llvm-gcc (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build). I think these were plain -O3, could try -march=native or some-such. 
00:37:25 must be a regression, as I don't think 3.4.4 had that problem 00:38:52 Oh, and what it generated for the wrapping opcode was http://pastebin.com/caKbRGzh -- where in the upper version .L281 is immediately followed by a jmp **%rax. 00:39:55 you're on Athlon64, right? 00:39:59 Yes. 00:40:24 That double-jump is a bit strange construct anyway. 00:40:43 so -DBOUNDARY_CHECKING is faster ... but that code looks much more complex than the other bit 00:40:52 it must be something else, it can't be that 00:40:57 Yes, but it doesn't have the very silly double-jump the other bit has. 00:41:08 jmp .foo; .foo: jmp *%rax. 00:41:45 I didn't diff the whole assembly; the C code didn't change very much, but of course the compiled thing might've changed a lot. 00:41:59 what GCC version, 4.4.3? 00:42:02 I wanted to try clang, but my version of it (2.7~svn20100317-0ubuntu1) has a bug on the &&label thing. 00:42:16 The same I mentioned just five minutes ago. 00:42:23 oops ;-) 00:42:38 try "-mtune=generic -O2" 00:42:45 (sometimes -O3 is worse) 00:42:54 I also shared the thing at http://git.zem.fi/ff 00:44:09 And in fact I think the current BOUNDARY_TRACKING is broken; it wraps wrong if the edge of the program has a #. (The bounding-box boundary's only one cell deep; should be two, like the fixed one around the playfield is.) 00:44:45 I haven't tested it with anything that'd actually p outside the program's bounding-box. 00:46:49 BTW, mandel.bf doesn't seem to work 00:48:07 With what? 00:48:13 Or just in general? 00:48:16 ff3 00:48:50 Right; I've only tested it with rot13, your benchmark, and Mycology's 93 part, which isn't very comprehensive. 00:49:43 Do you have something it works with, and can output a trace of executed instructions, and compiles well on Unixy things, and so on? 00:51:07 output a trace? not sure 00:51:18 but mandel.bf is an official example on Cat's Eyes' site 00:51:27 http://www.pastebin.org/155608 00:51:51 original bef.c should compile fine on *nix, esp. 
since Pressey is apparently such a FreeBSD fan 00:52:39 That's funny results too, with the ff3b case. 00:52:45 yes 00:53:10 a P4 is a strange animal 00:53:22 AMD64 usually isn't as braindead, but it has weird corner cases too 00:54:39 hmmm, bef 2.2.1 has "-s stack" to write stack contents to file, but even after Ctrl-C (half finished, it was slow), the file is still 26 MB! :-P 00:55:23 Hrm, I get "Unsupported instruction 'ÿ' (0xffffffff) (maybe not Befunge-93?)" from bef.c all the time on mandel.bf. 00:55:24 lots of "g" and "p" and multiplication in there, so maybe it does some wraparound or relies on undefined behavior 00:55:34 (sorry, wraparound as in integer overflow) 00:55:37 use -q 00:55:41 that'll shut it up ;-) 00:56:01 CCBI also works on it, if that helps 00:56:22 I think CCBI also had some sort of trace thing. 00:56:23 not sure what or if it supports for outputting to file, but it does let you -t (trace) 00:56:29 I guess I could get a binary. 00:56:40 Conference of Catholic Bishops of India 00:56:47 Always hard to google it. :p 00:57:02 http://users.tkk.fi/~mniemenm/files/befunge/interpreters/ccbi/ccbi-linux-x86-32.txz 00:57:10 heh 00:57:39 (needs p7zip to unpack, I can re-pack as .ZIP or .tar.gz and upload to RapidShare if you need it) 00:57:53 lol. 00:58:05 actually, GNU tar supports .xz now, I think (not sure, never tried) 00:58:21 "tar Jxvf" worked just fine. 00:58:30 I think it does need external xz-utils installed. 00:58:53 Hopefully ff3 and ccbi will diverge soon, there's quite many instructions to trace through otherwise. 00:58:55 why he didn't just pack as .7z is beyond me 01:01:15 oh, BTW, I think I increased the IF_X and IF_Y to 20 and 2, respectively, since I thought benchmark2.bef might need it (might explain the speed diff in ff3b) 01:01:30 anyways, I gotta jet, bbl eventually ;-) 01:02:08 -!- Rugxulo has quit (Quit: Rugxulo). 01:02:14 I guess you mean PF, not IF; but yes, at least the Y=2 it needs. 01:03:08 -!- Oranjer has joined. 
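The wraparound guesses in the mandel.bf exchange above can be made concrete. Interpreters in the bef.c family keep stack cells as C `int`, so arithmetic wraps at 32 bits; here is an illustrative sketch (not bef.c code — `wrap32` is an invented helper, and 7398752256 is the kind of over-wide value ccbi's trace shows later in the log):

```python
def wrap32(n):
    """Reduce an unbounded integer to the signed 32-bit value a C 'int'
    stack cell would hold after overflow (two's-complement wraparound)."""
    n &= 0xFFFFFFFF                      # keep the low 32 bits
    return n - 0x100000000 if n >= 0x80000000 else n

# 7398752256 needs 33 bits, so a 32-bit cell can't hold it as-is:
assert wrap32(7398752256) == -1191182336
assert wrap32(2**31) == -2**31           # smallest representable value
assert wrap32(-1) == -1                  # in-range values pass through
```

An interpreter with wider cells (or unbounded integers) gets different values back from `*`, which would match the "differently broken" outputs reported below for different -DSTACK_TYPE choices.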
01:05:27 Boxes. 01:05:27 Boxen. 01:05:27 bf10. 01:05:31 Must. Implement. Linear. Loops. 01:08:29 -!- augur has quit (Remote host closed the connection). 01:09:49 -!- augur has joined. 01:11:42 annoyingly i don't think my parser architecture can support such an advanced optimisation by itself 01:11:44 (augur) PARSER 01:11:45 -!- FireFly has quit (Quit: Leaving). 01:11:50 that means you have to help me 01:11:52 since it's PARSER-RELATED 01:12:02 and you're ALMOST SORT OF related to PARSERS 01:13:17 Thailand went through a bloodless coup while the President was out in the United States to address the United Nations. What do you think? 01:13:17 [..] 01:13:18 "This is all in accordance with Thailand's 'Whoever sits in the President's chair is the President' policy." 01:13:20 http://www.theonion.com/articles/bloodless-thai-coup,15074/ 01:13:21 :o 01:13:24 PARSERRRRRRS 01:13:32 Best policy ever. 01:18:40 Rugxulo: in case you logread; strange about that mandel.bf; ccbi's trace of it has numbers like 7398752256 in stack. It might be depending on some particular wraparound; I get a differently broken output with -DSTACK_TYPE='unsigned int' (or signed/unsigned long) from ff3 than with the default 'int'. 01:25:33 clearly a complex program 01:28:27 Rugxulo: Okay, ff3 runs mandel.bf correctly if you compile it with -DPF_TYPE='signed char', or just 'char' on proper systems/compilers -- it seems to store numbers in range [-128, 127] to the playfield and expect to get them back as-is. (My PF_TYPE default is "unsigned char".) 01:28:38 alise__, I'm going to reveal to you what my dad's big idea was. I'm pretty sure something like it already exists, otherwise http://www.androidzoom.com/android_applications/shopping/shopsavvy_eai.html couldn't really exist 01:29:02 Rugxulo: I think that's a bit wrong; the Funge-98 specification says "Befunge-93 defines signed 32-bit stack cells and unsigned 8-bit Funge-Space cells." 01:29:03 was that the big idea that would make him five millions? 
01:29:08 and millions and millions? 01:29:15 Rugxulo: Admittedly I have no clue where Befunge-93 specifies that; certainly not in the official spec. 01:29:42 It was the big idea that he wanted me to keep secret, don't remember if he thought it would make us rich 01:29:54 billions and billions 01:30:55 -!- BeholdMyGlory has quit (Read error: Connection reset by peer). 01:31:32 WELL YOU BLEW IT NOW 01:33:57 Sgeo_: your dad sounds like one of the managers from thedailywtf 01:34:34 Rugxulo: If you want some mandel.bf times, http://pastebin.com/MA5RjDL3 01:34:44 Rugxulo: It doesn't quite work with -DBOUNDARY_TRACKING at the moment. 01:34:59 (And probably tighter boundaries wouldn't much help.) 01:36:58 Ugh; this is gonna be really hard. 01:37:06 I guess linear loops should be done after parsing. 01:37:26 Oh, I can't. 01:37:30 The length of the loops might change. Grr.. 01:38:29 * Sgeo_ is starting to get upset at how there are certain apps for iPhone but not Android 01:38:47 Not just Robozzle 01:40:55 -!- oerjan has quit (Quit: Good night). 01:46:37 Apparently, the Nexus One screen doesn't look good in direct sunlight? 01:47:11 Yes, I read about that. A lot of screens don't. 01:47:18 Didn't you use one? 01:49:50 Not in direct sunlight really 01:50:25 -!- Mathnerd314 has joined. 01:53:26 Rugxulo: Not much difference there, http://pastebin.com/BqmeZXUq -- I don't think it wraps all that much. 01:58:43 (WW) RADEON(2): Direct rendering disabled 01:58:44 (II) RADEON(2): Render acceleration disabled 01:58:44 No great wonder the rotated display is now horribly slow. 02:01:02 -!- alise__ has quit (Ping timeout: 258 seconds). 02:06:31 -!- Asztal has quit (Ping timeout: 252 seconds). 02:12:53 -!- alise__ has joined. 02:24:35 * uorygl tries to remember why if you do 9999*9999 and add the left and right halves of the result, you get 9999. 02:24:50 magic 02:25:48 Because that's the ones' complement square of negative zero? That doesn't sound like the answer I got last time. 
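uorygl's 9999*9999 puzzle has a compact arithmetic answer: 9999² = (10⁴−1)² = (10⁴−2)·10⁴ + 1, so the left four-digit half is 9998, the right half is 0001, and they sum to 9999. In base 2 the same identity underlies the end-around carry of ones'-complement addition. A quick numeric check (illustrative, not from the channel):

```python
sq = 9999 * 9999                  # 99980001
left, right = divmod(sq, 10_000)  # split into four-digit halves
assert (left, right) == (9998, 1)
assert left + right == 9999

# The identity (b-1)**2 == (b-2)*b + 1 makes this work in any base b:
for b in (2, 16, 10**6):
    left, right = divmod((b - 1) ** 2, b)
    assert left + right == b - 1
```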
02:26:15 Last time, it was in binary, not decimal. 02:26:49 -!- cheater3 has quit (Ping timeout: 246 seconds). 02:27:44 Last time, the answer was the difference of squares expression 9999^2 - 1^2 + 1. 02:28:29 -!- augur has quit (Ping timeout: 276 seconds). 02:31:59 -!- cheater2 has joined. 02:32:53 -!- Oranjer1 has joined. 02:36:09 -!- Oranjer has quit (Ping timeout: 264 seconds). 02:37:10 Does anyone use the ASCII notation (Σn∈[0,∞] n/n!) for sum n=0 to infinity, n/n!? I guess it's not all that clear that n is an integer there. 02:37:54 (Σ(0≤n≤∞) n/n!) would also work, although that is not as pretty. 02:47:02 I didn't realise Spivak wrote AMS-LaTeX! 03:01:06 -!- Oranjer1 has changed nick to Oranjer. 03:09:02 My unintelligently designed notation: (Σ_n=0^∞ n/n!) 03:09:42 I think it's understood that you're not taking the sum over all nonnegative real numbers, especially since ! is only defined for integers. 03:09:56 But [0,∞), please. 03:13:44 Pfft, you just don't appreciate infinity divided by the factorial of infinity. 03:14:06 The problem with (Σ_n=0^∞ n/n!) is that, well, it's sort of unreadable compared to the two-dimensional notation. 03:17:21 -!- augur has joined. 03:17:29 -!- alise_ has joined. 03:20:46 -!- alise__ has quit (Ping timeout: 258 seconds). 03:21:41 -!- Tritonio_GR has quit (Read error: Connection reset by peer). 03:23:47 -!- diofeher has joined. 03:24:45 Hi diofeher; you new? 03:26:06 hi alise_ 03:26:08 yes, i'm now 03:26:10 new* 03:26:23 i'm doing a brainfuck programming... to show my love to my girlfriend 03:26:25 haha 03:26:34 alise_: http://gist.github.com/369963 03:26:59 o-kay 03:27:26 alise_: what are you programming here? 03:27:37 magic unicorn babies 03:29:16 alise_: hahaha 03:29:21 Flash games, or iPhone games? 03:29:25 what do you all talk here? 03:29:52 diofeher: magic 03:29:56 & also flower-picking 03:30:01 these are two things very important to us 03:30:24 Esoteric languages is the official topic. 
There's usually talk of advanced computer science and math topics 03:31:05 understood alise :P 03:31:17 nice Sgeo_ ... advanced computer science seems nice to me 03:33:25 mostly though it's just me being annoying 03:50:37 -!- Mathnerd314 has quit (Ping timeout: 240 seconds). 03:55:44 Someone suggested I make a Befunge wallpaper 03:55:47 live wallpaper 03:59:05 -!- alise_ has quit (Ping timeout: 258 seconds). 04:04:26 -!- ze_german has quit (Ping timeout: 246 seconds). 04:07:16 -!- augur has quit (Remote host closed the connection). 04:07:34 -!- augur has joined. 04:15:45 -!- calamari has joined. 04:20:56 -!- Mathnerd314 has joined. 04:24:03 -!- alise_ has joined. 04:32:53 -!- diofeher has quit (Quit: ChatZilla 0.9.86 [Firefox 3.5.9/20100401213457]). 04:35:28 -!- alise_ has quit (Remote host closed the connection). 04:50:27 -!- augur has quit (Remote host closed the connection). 04:50:45 -!- augur has joined. 04:52:08 -!- MizardX has quit (Ping timeout: 276 seconds). 05:08:38 -!- comex has quit (Quit: leaving). 05:08:45 -!- comex has joined. 05:24:55 "This free download includes the tutorial and 15 puzzles. To unlock all the puzzles and online features you can use the In-App-Purchase store within the game." 05:25:05 [The Robozzle iPhone app] 05:25:09 Is that some sort of sick joke? 05:25:49 no 05:25:58 http://www.gamefaqs.com/mobile/iphone/home/994970.html 05:26:00 welcome to Apple world 05:27:31 Suddenly, the variety of apps available on iPhone is a bit less of an incentive 05:28:58 I mean, it would make sense if the puzzles weren't all available for free online anyway... 05:29:12 Also, he claims that there is a solution store, and I don't see how that's even physically possible 05:29:23 pikhq: ping? 05:32:19 Between that, and the fact that it's not implied anywhere else that most of the puzzles aren't free.. 
05:32:50 at least you didn't pay for the app to find out that the puzzles cost more 05:33:16 * Sgeo_ might end up making his own Robozzle for Android, rather than rely on someone else who may end up charging for access to most of the puzzles 05:35:22 -!- augur has quit (Ping timeout: 265 seconds). 05:37:14 -!- coppro has quit (Remote host closed the connection). 05:39:20 http://robozzle.com/forums/thread.aspx?id=1917 me complaining loudly 05:46:01 after seeing some complex interactions between enemies in supertux, it made me wonder about creating some kind of platform game esolang 05:50:44 -!- calamari has quit (Quit: Leaving). 05:51:51 -!- coppro has joined. 06:01:40 * Sgeo_ wonders if the RoboZZle iPhone app might be against the TOS 06:01:48 It does, after all, interpret a language 06:07:05 RoboZZle is TC, right? 06:07:50 Given the paint commands, yes 06:08:03 Another esotericer implemented Langton's Ant, which I think means TC 06:08:23 [given obvious memory issues ofc] 06:08:55 hmm... actually, it's pretty trivial to show that it is 06:09:13 if you have infinite program space and playing field, of course 06:10:19 each column is a slot on the tape; the programming language is sufficiently powerful to distinguish between every case of a column and act accordingly 06:24:36 -!- coppro has quit (Remote host closed the connection). 06:35:15 -!- Oranjer has left (?). 06:37:04 -!- coppro has joined. 06:54:01 -!- augur has joined. 06:56:17 hayo 07:11:18 -!- Rugxulo has joined. 
07:11:58 I've always considered it [frame pointer] as more a tool for humans and don't understand why compilers use it by default 07:12:03 I don't think x86-64 does use it by default 07:12:30 but normally they avoid it on IA32 because it makes debugging impossible (plus using ESP reg instead uses more output byte space) 07:12:47 -O3 -march=k8 -msse3 -fwhole-program 07:12:55 newer GCCs support "k8-sse3" target 07:13:03 x86-64 doesn't, true 07:13:12 however, I've seen programs slow down a lot with -O3, so your mileage may vary 07:13:27 (even -march doesn't always help) 07:13:45 personally, I'd suggest "-mtune=generic", but that's just my opinion ... 07:14:54 comex: gdb is worse than useless. usually :P 07:15:05 at one time there was somebody writing their own debugger for *BSD, not sure how far they've come 07:15:39 admittedly, my assembly was nonoptimal (it used two 'xchg' around each putchar) but that shouldn't make a huge difference; however, the code just slowed down a lot 07:15:49 XCHG has been slow since at least 586, maybe earlier 07:15:59 and it's (still?) always atomic, hence not pairable 07:16:27 -!- Halph has joined. 07:16:27 the only reason to use it nowadays (or ever, really) is for convenience or if you really want to "xchg eax,reg32" in a single byte ;-) 07:16:40 -!- coppro has quit (Ping timeout: 276 seconds). 07:16:41 -!- Halph has changed nick to coppro. 07:16:45 Well, slow and slow 07:16:51 It's faster than a multiplication :-P 07:16:57 Rugxulo: in a single bite you mean right???? 
07:17:06 no, I mean byte 07:17:16 no, you mean bite 07:17:20 like pizza rolls 07:17:45 I think its speed is mostly equivalent to the three movs it'd take otherwise 07:17:49 00000100 91 xchg eax,ecx 07:17:55 opcode = 91h 07:17:57 one byte 07:18:07 (sorry, 00000100 was the start address) 07:18:23 Deewiant, no 07:18:36 push/pop are pairable on original Pentium, xchg is not 07:18:46 Well yes, if you do push/pop 07:18:47 mov is also pairable (being pretty much the most common instruction) 07:19:04 But don't the movs depend on each other 07:19:11 sometimes, yes 07:19:51 newer cpus handle that okay, older ones have an AGI, which is still minuscule 07:20:41 push eax ; push ebx ; pop ebx ; pop eax = 1 + 1 cycles on 586, unlike xchg (3 or such, can't remember exactly) 07:20:53 it's really all these newer cpus that suck so bad that slow down old well-used optimizations :-( 07:21:03 P4, especially 07:21:11 though AMD64 ain't immune 07:21:12 Forget about P4 :-P 07:22:19 no way, there's too many still out there 07:22:30 (I'm on one now, my other is sitting a few feet away disconnected) 07:22:34 (also my aunts have two!) 07:23:06 GCC is much better targeting P4 than 586 :-/ 07:23:06 According to Agner push tends to have a higher latency than xchg 07:23:22 on P4? 07:23:32 Core 2 and i7 07:23:37 dunno about those 07:23:52 but I always read it was atomic, which meant it stalled everything 07:23:55 Not on P4, apparently 07:24:01 implied "lock" or whatever 07:24:32 I think it's only really slow if you swap with memory 07:24:40 But if you between two registers it's not that bad 07:24:47 +xchg 07:25:46 Deewiant: is that the sequence to xkcd? 07:26:04 >_< 07:28:08 *** is almost always really bad practice. 07:28:17 so is "goto" (in theory) but in practice everybody uses it 07:28:25 cdecl ftw! ;-) 07:28:34 * Rugxulo admits to not knowing or liking much, though 07:28:41 s/liking/& C/ 07:34:13 N.B. 
(fizzie) [ DR-DOS 7.03 ] P166 no MMX> runtime ff3 benchmk2.bef 07:34:14 2147483596 44.34 seconds elapsed 07:34:40 note that this is the one that does ">v<^" instead of wrapping around to x=0 again 07:35:00 gcc-3.4.4 -s -O2 -march=i586 -fomit-frame-pointer 07:35:45 Thailand went through a bloodless coup while the President was out in the United States to address the United Nations. What do you think? 07:35:55 isn't he also a born U.S. citizen?? 07:36:08 (oops, quoting alise) 07:37:16 huh, Conan moving to TBS, who knew? 07:38:11 Rugxulo: I think that's a bit wrong; the Funge-98 specification says "Befunge-93 defines signed 32-bit stack cells and unsigned 8-bit Funge-Space cells." 07:38:30 I'm pretty sure B93 was always intended to only be 7-bit ASCII, e.g. no support of other stuff 07:38:59 some very rare programs may try storing data in the fungespace instead of the stack, but that's not common 07:39:17 I think it was B98 that officially was 8-bit clean 07:39:46 hence, signed char (7-bit) vs. unsigned char (8-bit) 07:40:06 With access to only two stack cells, it seems like it should be very common; maybe the programs just don't do much :-P 07:40:35 there really aren't that many examples out there, probably < 200 07:40:51 (okay, so that's a bit presumptuous, how the hell do I know??? but still, you get the idea ...) 07:41:09 anyways, 1993 was well before Unicode was popular 07:41:09 Yeah, I know 07:41:56 "UTF-8 was first officially presented at the USENIX conference in San Diego, from January 25?29, 1993." -- Wikipedia 07:42:06 s/?/-/ 07:42:22 s/-/–/ 07:42:26 (Wikipedia is also UTF-8 now ... what was it before??) 07:44:24 ISO-8859-1 if the Wayback Machine is to be believed 07:51:01 guess I'll scram for now ... 07:51:04 -!- Rugxulo has quit (Quit: Rugxulo). 07:59:59 -!- clog has quit (ended). 08:00:00 -!- clog has joined. 08:00:56 -!- Gracenotes has quit (Remote host closed the connection). 08:21:19 -!- sebbu has quit (Ping timeout: 246 seconds). 
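On the signed-vs-unsigned cell question above (ff3's PF_TYPE, and the Funge-98 spec's "unsigned 8-bit Funge-Space cells" wording): the observable difference is whether a negative value stored with p reads back unchanged through g. A sketch modelling a one-byte playfield cell (`store_cell` is an invented name, not ff3 code):

```python
def store_cell(value, signed):
    """Model writing 'value' into a one-byte playfield cell with p and
    reading it back with g. Storing truncates to 8 bits; reading
    reinterprets the byte as signed char or unsigned char."""
    byte = value & 0xFF
    if signed and byte >= 0x80:
        return byte - 0x100      # signed char: high bit means negative
    return byte

assert store_cell(-1, signed=True) == -1    # round-trips with signed cells
assert store_cell(-1, signed=False) == 255  # comes back changed unsigned
assert store_cell(200, signed=True) == -56  # and vice versa
```

This would be why mandel.bf, which stores values in [-128, 127] into the playfield, was reported earlier to work with -DPF_TYPE='signed char' but not with the unsigned default.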
08:23:26 -!- aschueler has quit (Read error: Operation timed out). 08:23:31 -!- aschueler has joined. 08:25:30 -!- HackEgo has quit (Ping timeout: 240 seconds). 08:25:34 -!- HackEgo has joined. 08:26:10 -!- EgoBot has quit (Ping timeout: 240 seconds). 08:26:14 -!- EgoBot has joined. 08:33:37 -!- Gracenotes has joined. 08:53:47 -!- iamcal has joined. 08:55:34 -!- lament has joined. 08:56:24 -!- olsner_ has joined. 08:57:05 -!- cal153 has quit (*.net *.split). 08:57:05 -!- olsner has quit (*.net *.split). 09:06:59 -!- augur has quit (Remote host closed the connection). 09:07:06 -!- augur has joined. 09:12:46 -!- sebbu has joined. 09:20:36 -!- MigoMipo has joined. 10:02:39 -!- mre has joined. 10:03:30 -!- mre has quit (Quit: I'll be back). 10:07:46 -!- lament has quit (Quit: lament). 10:16:52 -!- tombom has joined. 10:24:20 -!- kar8nga has joined. 10:41:53 -!- FireFly has joined. 10:44:07 -!- sebbu has quit (Ping timeout: 240 seconds). 10:46:13 -!- BeholdMyGlory has joined. 11:24:06 -!- Asztal has joined. 11:29:42 -!- sebbu has joined. 11:43:13 -!- oerjan has joined. 12:05:51 -!- ze_german has joined. 12:25:51 -!- Tritonio_GR has joined. 12:42:54 -!- MizardX has joined. 12:52:59 -!- diofeher has joined. 12:55:50 hey guys, brainfuck is compiled or interpreted? 12:57:11 diofeher: yes :) 12:57:50 or, more explicitly: either, depending on whether you use a brainfuck compiler or a brainfuck interpreter to run it 12:58:01 and both do exist 13:01:17 yes, that's what I was thinking olsner_ 13:01:35 but it seems very strange to me... do you know another language that can be interpreted and compiled at the same time? 13:02:36 almost every language can 13:03:48 all depends on compiler...? 13:09:22 actually, I'm pretty sure that *every* language can be both (unless it's a language that can't be either - e.g. because it's impossible to implement on a computer)... 
it just depends on what people like to build for it (interpreters or compilers), how easy it is to implement the compiler, and how important it is to have programs in the language run fast 13:09:56 i.e. not a property of the language itself, but rather a matter of what has been implemented 13:13:38 olsner_: oh, nice explanation... thanks :) 13:13:51 a new world opened in my mind when I discovered these languages 13:14:09 i never heard about turing machine before them 13:15:01 prepare to have your mind blown a few times then :) 13:15:07 hehe 13:15:25 olsner_: do you have any material to point me about cool stuff of computer science? 13:16:14 Forth and Lispy things are some examples that tend to have both compilers and interpreters written for them; even in the sense that people use both. (Things like C interpreters don't have that many users, I don't think, even though they exist.) 13:17:21 Though, come to think of it, they do commercially sell Ch, that C/C++ interpreter... 13:17:28 I've read plenty, but I don't remember where :) browsing around the esolang wiki, and reading all the wikipedia articles on computation, is probably a good start though 13:17:58 there's a huge neural-network simulation package based around a C++ interpreter 13:18:34 -!- oerjan has quit (Quit: leaving). 13:18:39 (and it doesn't even try to be safe, like many other interpreters, a null dereference in the interpreted code is just a SIGSEGV) 13:20:01 I've heard that people use Ch to expedite the compile/run/debug cycle. 13:22:23 nice 13:22:33 thanks olsner_ 13:23:24 Can't quite think of a compiled language that I'd be sure doesn't have an interpreter. Java's got that Beanshell thing, apparently there are some Fortran interpreters... the other way around seems more common; I'm sure there are several marginal scripting languages no-one has bothered to write a compiler for. 13:23:36 Also, the border between interpreters and compilers is very fuzzy. 
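A concrete illustration of how fuzzy that border gets: brainfuck "compiled" to Python source, which the Python runtime then interprets. This is a toy for illustration only (input ',' is unhandled and unmatched brackets are not checked), not any of the implementations mentioned in the channel:

```python
def compile_bf(src):
    """Translate a brainfuck program into Python source, then exec it to
    obtain a callable. Whether this is a 'compiler' or merely the front
    half of an interpreter is exactly the definitional question."""
    body, indent = [], 1
    ops = {'+': 'tape[ptr] = (tape[ptr] + 1) % 256',
           '-': 'tape[ptr] = (tape[ptr] - 1) % 256',
           '>': 'ptr += 1', '<': 'ptr -= 1',
           '.': 'out.append(chr(tape[ptr]))'}
    for c in src:
        if c in ops:
            body.append('    ' * indent + ops[c])
        elif c == '[':
            body.append('    ' * indent + 'while tape[ptr]:')
            indent += 1
        elif c == ']':
            indent -= 1
    code = 'def run():\n    tape, ptr, out = [0] * 30000, 0, []\n'
    code += '\n'.join(body) or '    pass'
    code += '\n    return "".join(out)\n'
    ns = {}
    exec(code, ns)                 # hand the generated source to Python
    return ns['run']

assert compile_bf('++[>+++<-]>.')() == chr(6)   # 2 * 3 via a loop
```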
13:24:15 Java is generally JITted, so some of it is compiled and some of it is interpreted (using some definition of "compiled"). Many languages are "compiled" into bytecode, but then that bytecode is "interpreted". 13:24:52 Often the compilation into bytecode is transparent, but the process is certainly still there. 13:29:02 Gregor: so if my teacher says "java is compiled", i can point and say "you're wrong, java can be interpreted too" 13:29:33 I wouldn't recommend it :P 13:30:40 Me neither; I pointed out an equally nit-picky thing about floating-point once, and it led to wasted 5 minutes of an argument that helped no-one. 13:31:03 depends on if the teacher is a dick or not 13:31:11 Java, the language, can certainly go either way. Java, the platform, is a compiler for Java and JITting interpreter for JVM bytecode. 13:31:21 hahahaha 13:32:34 if you really want to, you can ask another student to point it out, then it's not your fault for holding up the class and you can still get to hear the argument 13:32:38 any of these esoteric languages have implementation in JVM? I saw that LOLCode and Brainfuck have implementations in .NET 13:32:56 olsner_: haha nice idea 13:32:59 brainfuck can be done in any c-like 13:32:59 I guarantee there's an implementation of Brainfuck in Java somewhere. 13:33:26 Brainfuck can be done in any TC language with I/O, if you're willing to put the effort into it :P 13:35:07 hehe 13:35:13 Gregor: TC? 13:35:26 turing complete 13:35:29 ah ok 13:35:31 `google site:esolangs.org "turing complete" 13:35:42 Nov 11, 2009 ... A programming language is said to be Turing-complete if it is in the same computational class as a Turing machine; that is to say, ... \ esolangs.org/wiki/Turing-complete - [13]Cached 13:35:56 oh, nice bot 13:36:08 `beer 13:36:09 No output. 13:36:12 damn 13:36:30 pineapple: It's a writable Unix filesystem, make your own beer command. 13:36:31 talking about beer, do you all have heard about beerware? 
13:36:40 yes 13:36:48 diofeher: I don't believe it to be a legally enforceable system of licensing :P 13:36:51 Gregor: :-P 13:38:39 should be! 13:39:08 I seem to have here a compiler from False to JVM bytecode, though I have no idea how complete it is; I also have no recollection of writing it, but it's got my name in the sources, so... 13:39:40 Sorry, FALSE is I guess the official name. 13:39:54 `google site:esolangs.org false 13:39:54 -!- tombom_ has joined. 13:39:55 Sep 2, 2009 ... FALSE (named after the author's favourite truth value) is an early Forth-like esoteric programming language invented by Wouter van ... \ esolangs.org/wiki/False - [13]Cached 13:40:11 yeah, thought that was the one that aardappel wrote 13:40:51 It doesn't support the backtick: ` compile short as 68000 machine instruction in the original Amiga FALSE implementation 13:41:30 lawl 13:41:41 Surely there's a 68000 simulator in Java :P 13:43:01 -!- tombom has quit (Ping timeout: 246 seconds). 13:44:24 `google site:esolangs.org amiga 13:44:27 Nov 7, 2008 ... He is the author of XPK compression package for the Amiga, and the Amiga version of CSH (the Unix C Shell). He now works for search.ch, ... \ esolangs.org/wiki/Urban_Müller - [13]Cached 13:44:54 [sensible code] else if (pendingFuncs.size() >= 1) throw new Exception("funky hunky"); else throw new Exception("dinky tonky"); -- yes, I'm afraid this might be still a bit incomplete. 13:46:01 heh 13:46:35 "in this case, the program shall terminate with the message 'dinky tonky'" 13:46:37 hehe 13:46:54 (I mean, it *could* be a legitimate language feature :P) 13:47:11 Depends on your definition of "legitimate" 13:50:14 There's an annoying safety feature in the JVM in that all code paths need to have a deterministic stack effect, and if you can reach a particular instruction through several different paths, the verifier checks that the stack changes from start of the function for all paths must be the same. 
Makes it more difficult to use the underlying JVM stack as the stack for the language you're compiling. 13:50:23 (Of course it's mostly a problem for stack-oriented languages only; and sure, you can always just use an int[] as the language's stack or something, but that's so inelegant.) 13:52:20 Speaking of bots, I guess I'm going to have to do the annoying advertisement bit I do for all newcomers. 13:52:22 ^source 13:52:22 http://git.zem.fi/fungot/blob/HEAD:/fungot.b98 13:52:38 That there's a useful use of an esolang. For some values of useful. 13:54:23 omg, what is this language? 13:54:46 befunge-98 13:54:56 judging by the file extension 13:55:23 fizzie: is that the source of the bot? awesome 13:55:36 I should learn me a funge 13:56:16 olsner_: Yes. (Though it's not completely trivial to run, it needs a loader file, and the babble-thing needs some fiddling.) 13:56:26 fungot: Say something clever now. 13:56:27 fizzie: who was it 13:56:32 fungot: That wasn't clever. 13:56:32 fizzie: those people like to call the functions foo, bar, baz; worries 13:57:10 That wasn't very clever either; I give up. 13:59:07 haaha 13:59:14 fungot: hi dude 13:59:15 diofeher: georgia, of course. but no sure how. if the garbage collector, for example 14:02:33 There's some other speaking styles if the default is boring. 14:02:35 ^style 14:02:35 Available: agora alice c64 ct darwin discworld europarl ff7 fisher ic irc* jargon lovecraft nethack pa speeches ss wp youtube 14:02:43 And also a couple of sub-interpreters in there. 14:02:48 ^bf .[,.]!hello 14:03:20 ^bf ,[.,]!hello, I mean to say 14:03:20 hello, I mean to say 14:04:36 I always manage to swap those two, since befunge's , is the output-character thing. 14:04:41 ^ul (and underload)S 14:04:41 and underload 14:16:30 !bf_txtgen Hewwo 14:16:41 74 ++++++++++++[>++++++>++++++++>++++++++++>+<<<<-]>.>+++++.>-..--------.>--. [415] 14:16:52 ^bf ++++++++++++[>++++++>++++++++>++++++++++>+<<<<-]>.>+++++.>-..--------.>--. 14:16:52 Hewwo. 
14:45:09 !bf_txtgen hi pretty girl 14:45:12 136 +++++++++++++++[>+++++++>++>+++++++>+++++++<<<<-]>-.+.>++.>+++++++.++.>----.<++..+++++.<.>>++.++.<-------.<<+++.>----------------------. [160] 14:45:55 nice bot :-) 15:39:36 -!- alise has joined. 15:40:52 abcdefg 15:42:08 05:09:22 actually, I'm pretty sure that *every* language can be both (unless it's a language that can't be either - e.g. because it's impossible to implement on a computer)... it just depends on what people like to build for it (interpreters or compilers), how easy it is to implement the compiler, and how important it is to have programs in the language run fast 15:42:13 self-modifying is pretty hard to "compile" 15:43:39 05:40:11 yeah, thought that was the one that aardappel wrote 15:43:40 THE one? 15:43:47 Ho ho ho ho ho ho ho 15:43:59 http://strlen.com/proglang/index.html 15:44:10 He has probably created more languages than any other human being. 15:44:23 http://strlen.com/aardappel/index.html is his most esoteric non-FALSE one, probably. 15:44:32 self-modifying is only hard because people don't try hard enough! 15:45:22 I'm looking forward to your Befunge-98 compiler. No JITting. 15:45:31 but yes, you may need quite complicated analysis to undo the self-modification at compile-time, or you have to "compile" the code to its interpreter 15:45:42 It /could/ be done... but the code would be very slow, and probably hyperexponentially big relative to the code size. 15:45:52 The latter isn't really compiling 15:47:03 [["Desperate, Marooned Astronaut Tries To Use Every Item With Every Other Item"]] 15:52:21 Anything with a real eval()-type of thing is also hard to compile, at least if you consider including the compiler in the output. 
15:58:12 maybe if you were able to do dataflow analysis to figure out which self-modifications are possible 15:59:29 but if the range of self-modifications is not bounded or you can't solve the halting problem, you'll probably have to include an interpreter anyway 16:01:14 or just make the compiled code generic enough :) 16:02:00 Don't suppose anyone wants to help me figure out how to do linear loop optimisation in my interp? :P 16:02:22 olsner_: with befunge you can do it by modifying the code as it runs 16:02:25 to contain whatever instruction there is 16:02:28 threaded code style 16:08:16 it's annoying that my architecture doesn't really support it well 16:16:58 -!- adam_d has joined. 16:19:24 I think I got a bit flamey on the RoboZZle forums 16:19:32 Bah, I hate existing proof assistants. 16:19:38 Stop avoiding computation! 16:19:50 Do computation all the time! Put trivial steps in for me! Do some god damn number crunching; you're a COMPUTER! 16:20:02 -!- Mathnerd314_ has joined. 16:20:05 Someone in android-dev suggested a Live Wallpaper of Befunge 16:20:43 Game of Life would be prettier. :p 16:20:46 Ooh. 16:20:50 A befunge GOL that works on its own playfield. 16:21:30 Already exists. 16:22:45 My dad doesn't want me using it on the bus, he's afraid someone will steal it 16:22:58 I don't _think_ that's likely on the busses I take, but still 16:23:01 -!- Mathnerd314 has quit (Ping timeout: 265 seconds). 16:23:15 -!- Mathnerd314_ has changed nick to Mathnerd314. 16:23:41 Not using a mobile thing while mobile sounds somehow point-missingy. 16:24:40 And I can't use it while going to my step-mother's apartment 16:24:57 Although one of the use-cases is when I got lost trying to get there 16:25:17 -!- diofeher has changed nick to diofeher__away. 16:25:39 alise: does "linear loop optimisation" refer to a common optimisation or is it something BF-specific? 16:26:17 Sgeo_: You know, you /are/ 20. 
16:26:34 olsner_: basically a loop is linear if it has the same number of s, and all loops within it are linear 16:26:49 we can optimise /all/ of these into a constant time operation, well; apart from the ones that do input, dunno about output 16:27:27 because they all reduce to a sequence of "tape[tape_pointer + n] (+= or =) m;" where m can involve other tape[tape_pointer + n] cells of the tape, and also constants 16:27:50 http://mozaika.com.au/oleg/brainf/bff4.c the bits in #ifdef LNR do the optimisation in this (currently champion interpreter) 16:27:51 right, essentially that condition means that the loop returns to the same cell pointer as it was before the loop, after doing some transformation on a set of cells? 16:28:00 esotope-bfc also does it: http://code.google.com/p/esotope-bfc/ 16:28:01 olsner_: yes 16:30:13 bff4's code is not the clearest and esotope-bfc does much more advanced optimisations at the same time 16:30:17 so unfortunately there is no real source for this stuff 16:30:29 though http://lifthrasiir.jottit.com/esotope-bfc_comparison has a small bit of info 16:31:21 olsner_: mainly the issue with doing it with my interp is that I parse directly into the flat program array as I get input, so I can't do the optimisation after, for the result will be smaller than the previous loop 16:31:25 and so i have blank cells 16:31:38 bff4 seems to have very simple code for detecting the linear loop, although it's not clear at all what o[i].linear means 16:31:40 but i can't do it while parsing because my parser isn't clever enough for that sort of thing, I don't think 16:31:40 not sure 16:31:49 I think o[i].linear is set when it decides it's linear 16:31:56 *I think 16:32:06 " Today, I bought an app on my iPod touch that was $900 because I thought it was just a joke. Turns out it wasn't." 16:32:08 WTF? 16:32:25 Turns out you're an idiot. 
:-P 16:33:31 you also need to know the meaning of the "igo", "shift" and "off" fields 16:35:09 olsner_: yes, which i don't 16:35:17 i think the previous code does stuff to help linear loop optimisation 16:38:34 olsner_: I think parse() is my best bet for where to put the optimisation - but I'm not sure the way it's coded can handle it 16:38:38 http://pastie.org/925772.txt 16:38:43 because the recursive parser appends directly to the main instruction array as it goes along 16:39:50 hmm... I think with a fixed tape size, I could perfectly optimise BF 16:40:22 you could augment you internal instruction set to allow for "holes" where you've optimized away some stuff 16:40:32 to wit: detect infinite loops - possible because memory is small and bounded - and replace them with infinite loop instructions; run the rest of the program up to IO, embed the input/output instructions based on the transformations done to the tape 16:41:13 yes, but that means the interpreter would have to skip over dem holes 16:41:17 which would be slow 16:41:18 is that even turing complete? 16:41:26 bf with fixed tape size? 16:41:29 yeah 16:41:30 and fixed cell size? 
16:41:31 of course not 16:41:35 I'm adding tape-growing later 16:41:52 I'd rather it somehow efficiently pooped the program into a separate buffer, and kept track of the running total of cell-moves 16:42:01 and if it's linear, spit out a linear instruction at the end, not a loop 16:42:08 ("poop" is a technical term) 16:43:44 just not sure how to do that without a lot of mallocing 16:43:58 ok, *d in op in bff4 is what contains the actual tape modifications 16:44:11 looks like it should be pretty straight-forward to iterate a loop to see if it's linear, provided you can buffer the whole loop body 16:44:11 hmm 16:44:17 it seems that every instruction is treated as the same "type" in bff4 16:44:22 well, mostly 16:44:23 like 16:44:33 > is "as if" just tapepointer+=1 16:44:37 but it could also be 16:44:42 tapepointer+=1, tape[tapepointer-4]=5 16:44:45 add delta, move cell-pointer, move ip by offset? 16:45:43 I'm not sure 16:45:48 http://mozaika.com.au/oleg/brainf/bff4.c consume() is the magic, I think 16:46:07 like, consume parses bf into one unified instruction set, such that loops literally just become a list of "primitive instructions" and keep track of whether they're linear 16:46:08 I think 16:46:15 so then we can just run the instructions in a /different manner/ if it's linear 16:46:17 but the same instructions 16:46:18 maybe? 16:46:19 this is just a guess 16:46:26 -!- diofeher__away has changed nick to diofeher. 16:47:18 this would seem to clash with my computed goto trick. 16:47:30 although i am tempted to try it with bf10, perhaps, and if it's faster that'll make linear loops in, say, bf11 easier 16:47:34 I'll give it a go 16:47:48 hmm 16:47:50 the issue is ordering 16:48:02 if you have mov=3, chg=-4, is it >>>---- or ---->>> or something else entirely?
16:48:26 depends on your definition :) just do it in the right order 16:48:56 right 16:49:03 so >->- would turn into, say, two instructions 16:49:09 mov=1 chg=-1, mov=1 chg=-1 16:49:23 obviously jumps should be done after this 16:49:28 as otherwise we'd have to retroactively modify instructions 16:49:49 typedef struct { 16:49:49 int mov; 16:49:50 int chg; 16:49:50 char io; 16:49:50 op *jmp; 16:49:50 } op; 16:50:08 yeah, since it modifies two cells it'd probably have to be two ops 16:50:13 io is 0 if , 1 if . 16:50:22 so given mov=i, chg=j, io=0, we have 16:50:29 >^i +^j , 16:50:30 so given mov=i, chg=j, io=1, we have 16:50:31 dunno how common that is, but you could also collapse into an array of mov's with a base offset 16:50:31 >^i +^j . 16:50:36 and if loop points somewhere, that's the... oh wait 16:50:41 we need to distinguish begin and end loops 16:50:51 char loop; 16:50:51 op *jmp; 16:50:54 loop = 0 is [ loop = 1 is ] 16:52:08 is [ doing the testing (comparing to zero) or is ] doing it? 16:52:10 nicely, this means that the condition is 16:52:10 if (tape[tp] == loop) 16:52:10 olsner_: too advanced operations break it though 16:52:10 since this is an interp 16:52:11 everything costs 16:52:11 hmm... 16:52:12 io should be 0 for no io, 1 for input, 2 for output 16:52:13 so I can just say if (io) 16:52:14 both 16:52:16 obviously 16:52:17 moar speed 16:52:20 (by a long way) 16:54:27 I notice you have two booleans in there - you could encode them in the jmp pointer's low bits to reduce the data size 16:56:00 except that io had three values, nm 16:56:01 yes 16:56:01 but i'd rather use megs of memory than have to do more operations 16:56:01 if (ip->loop && ((ip->loop-1 && !tape[tp]) || tape[tp])) { 16:56:01 ip = ip->jmp; 16:56:01 } 16:56:02 my eyes, they burn 16:56:11 it would sure be nice if that code was comprehensible 16:56:19 comex: bff4? 
16:56:20 or mine 16:56:21 bff4, I mean 16:56:23 yeah 16:56:33 it's impressive code but clearly written by a lone genius :-) 16:56:39 if you're into brainfuck, readability obviously isn't a big concern 16:56:48 lol 16:56:53 oh, I have nothing to encode stop 16:56:57 never mind, I'll stuff it into loop 16:56:59 That !tape[tp] is redundant 16:57:09 no i won't 16:57:13 i'll stuff it into jmp 16:57:15 no i won't 16:57:22 Deewiant: No it isn't? 16:57:26 Oh, yes it is. 16:57:29 Er... oops. 16:57:30 It should be 16:57:40 encode it into jump, catch the SEGV and see if it crashed due to a null pointer, stop normally 16:58:00 except sometimes it's a SIGBUS :p 16:58:01 it's pretty exceptional to see the program stop anyway, happens at most once 16:58:10 Deewiant: I'm trying to say if ip->loop: if ip->loop is 2 and not tape tp, jump; if ip->loop is 3 and tape tp, jump. 16:58:12 Without using ==. 16:58:14 For no particular reason. 16:58:26 * comex doesn't see the point of collapsing linear loops, except for ones that copy one cell into another 16:59:19 comex: because instead of looping a lot and doings lots of jumps for a simple task, you perform, say, one assignment; or one increment/decrement, for each cell modified 16:59:23 and avoid doing costly pointer movement 16:59:37 http://mozaika.com.au/oleg/brainf/ ;; just look how much faster bff4lnr is to bff4 17:00:14 haha, except when compiled with cygwin gcc 17:01:40 nobody cares about cygwin :) 17:02:43 * alise considers io as a jump action 17:02:44 erm 17:02:47 * alise considers stopping as an io action 17:03:20 io is becoming a general trap instruction for native-calls 17:03:27 yep! 17:03:37 actually it's just because it saves me one more branch most of the time 17:04:04 if (io) goto trap[io]; 17:04:13 lol 17:04:24 I've written interp, that was the easy part; now I need to rewrite parse 17:04:27 I bet this ends up slower 17:04:42 how significant is parsing time really? 
17:04:47 for your benchmarks, I mean 17:05:47 -!- olsner_ has changed nick to olsner. 17:06:02 benchmark, singular 17:06:05 well 17:06:10 parsing is the least optimised part by far 17:06:13 it has many, many branches 17:06:15 so dunno 17:06:19 probably not that much 17:07:30 I should complete my compiler-for-sensible-imperative-language and then start working on one of these of my own, it seems like fun 17:08:11 it is, optimising interpreters are wonderful 17:08:16 optimising compilers are easy since you can take as long as you want 17:08:45 optimizing compilers are even easier if you can reuse existing awesome optimizers (e.g. LLVM) 17:10:52 -!- BeholdMyGlory has quit (Remote host closed the connection). 17:10:57 I wonder what I need in terms of abstraction to be able to produce some parser combinator library inside my language 17:13:00 Monads or at least applicative functors help a lot. 17:13:37 $ ./bf10 17:13:37 Segmentation fault 17:13:38 yeah, and that pretty much requires some kind of polymorphism and fancy type-system machinery 17:13:38 It is a start. 17:13:58 Not "fancy". 17:14:10 Even a simple dependently-typed language can be typechecked in about 100 lines of code. 17:14:22 Hindley-Milner is actually pretty simple if you don't care about lovely errors and the like. 17:14:37 olsner: Besides, you don't need polymorphism to make a monad. 17:14:43 Just rebind >>= in every monad definition. 17:14:46 And have no general monad class. 17:14:49 You don't need it for parser combinators. 17:15:01 true that 17:15:43 Well, bf10 no longer segfaults; instead, it sits and does nothing. 17:16:09 I was planning to add some kind of type inference after taking care of the fiddly details of getting something parsing, compiling and running at all 17:16:29 A bad idea: it's rather fundamental. 17:16:47 too bad I don't know how it works then 17:16:53 -!- BeholdMyGlory has joined. 17:17:03 Well, look it up. 
:-) 17:17:27 augustss has done a really simple typechecker; but for dependently-typed lambda calculus. 17:17:51 I assume you don't want to go down that theoretical road: all programs terminate (and so sub-TC, although only barely), general inferrence isn't always possible, etc. 17:18:47 no, probably not :) 17:20:51 (gdb) print tape 17:20:51 $7 = "\fF\000\000/F�", '\0' 17:20:53 Helpful. 17:21:15 I must have my ordering wrong or something. Sigh. 17:22:41 * Sgeo_ had a dream where the iPad fit in his pocket, which drove me to get an iPad 17:22:43 Strange... 17:23:50 Sgeo_: lol 17:24:02 Is that an iPad in your pockets or are you just happy to see me? 17:24:05 *pocket 17:24:25 -!- augur has quit (Remote host closed the connection). 17:24:39 Grr... this should be working. 17:24:44 no, really 17:24:52 Really, no? 17:25:17 alise: give me an example of a program other than [->+<] that is usefully optimized by linear stuff 17:25:28 * alise realises that {,} A := A x A -> Bool doesn't work; it could return Bool for more than two pairs. 17:25:42 And we can't use exists because it would identify the two pairs and thus give them an ordering of sorts. 17:25:52 comex: examine the output of LostKng.b from esotope sometime 17:25:57 it has like 90% fewer loops than the original 17:26:32 but, an example 17:26:34 please :p 17:26:52 half the loops in every program. 17:27:00 i'm a shitty bf coder so i cannot give you one off hand 17:27:05 comex: oh, for one, every constant-generating code 17:27:11 like from the bf constants page 17:27:15 or generating a constant string 17:27:20 those are just simple though... 
17:27:21 there are far more 17:27:30 basically anything you'd write in an imperative lang manipulating your variables that /isn't/ a loop 17:27:32 becomes a linear loop in bf 17:29:08 hm 17:29:13 that sounds hard to optimize 17:29:17 it's not actually 17:29:23 check that there are the same number of > and then, you can do it like this: "run" it, keep track of the current pointer, then spit out p[here + offset] (relative to the starting position using the keeping-trackness), then accumulate the increments and decrements it does to it 17:29:52 and spit out that as a += 17:30:00 (plus optionally optimise for = x like with [-]) 17:30:08 then you can rearrange it to sift out multiple changes of the same cell 17:30:19 http://mazonka.com/brainf/bff4.c does it, the bits in #ifdef LNR 17:30:51 hey guys, the cardinality of (list of A) is -(|A|/(|A|-1)), where |A| is the cardinalit yof A 17:30:59 *cardinality of 17:31:18 so, for instance, a list of booleans has cardinality -2 17:33:49 -!- ze_german has quit (Ping timeout: 252 seconds). 17:38:04 I refuse to give up until bfN is faster than bff4! 17:38:52 ok, at one point my ip stops changing 17:38:54 queer 17:38:56 oh 17:38:59 it is mixing input and output 17:39:25 olsner: yes this method is a lot slower 17:39:28 but maybe i can optimise out the jumps 17:39:59 if (ip->loop && ((ip->loop == 2 && !tape[tp]) || (ip->loop == 1 && tape[tp]))) { 17:40:02 This, I am sure, could be improved. 17:41:55 Oh, heh; I just forgot to use -O3. 17:42:05 It is still slower, though. 17:44:21 hmm, '[' is jump-if-false to (after) the end of the loop while the ']' is jump-if-true to the beginning? 17:44:36 to after the beginning 17:44:40 right 17:44:44 one less increment; significant difference in speed actually 17:44:48 small but extant 17:45:17 * alise has an idea 17:45:20 meh 17:45:22 this thing is slower 17:45:28 I should just chuck it out and face the linear loop monster 17:45:52 now what... 
17:46:00 I still don't know how I should do linear loops 17:46:19 Oh, maybe I should allocate, in parse's scope, the output array. 17:46:27 But I return the /end/, and I'd need to return the beginning then. 17:46:34 And merging it in would be slow.. 17:46:40 *slow... 17:50:10 olsner: Really I should probably do some microoptimisation to satisfy me before tackling the big stuff ;-) 17:51:31 how does the assembly look? :) 17:51:51 olsner: With gcc? Bad; it decides that all my threaded-loveliness NEXT should get put into one place and jmps to it. With clang? Super-duper-excellent. 17:51:59 I don't have clang though so I'm testing with gcc. 17:52:06 It's still fast mind. 17:59:06 -!- lament has joined. 17:59:36 I've got some falling.. erm, homework to do! 18:01:12 Bye all! Homework now 18:02:02 -!- Sgeo_ has quit (Quit: Leaving). 18:03:25 olsner: I tell you what: you make my BF interp do linear loops, and I'll make the best proof assistant ever. 18:03:26 Deal! :P 18:19:48 * alise formalises mazes in Coq; decides that he's too lazy to write a solver 18:49:37 -!- adam_d has quit (Ping timeout: 246 seconds). 18:50:05 You know... someday soon, we'll start saying "What version of TeX are you using?" "π." 18:50:06 When Knuth dies. 18:53:46 Knuth can't die; he has a book to finish 18:58:14 He'll finish it eventually, though. 18:58:42 -!- coppro has quit (Quit: I am leaving. You are about to explode.). 19:00:09 "It’s soooo ugly seeing variable names in formulas written without \text{} or \mbox{}… I think it’s one of the most common mistakes…" 19:00:10 FAIL 19:01:00 Why the fuck don't \alpha et al. work outside math mode? Ugh. 19:07:40 -!- augur has joined. 19:19:32 -!- adam_d has joined. 19:24:08 -!- ze_german has joined. 19:32:48 -!- augur has quit (Remote host closed the connection). 19:32:53 -!- augur has joined. 19:32:57 -!- oerjan has joined. 19:33:08 -!- adam_d has quit (Ping timeout: 240 seconds). 20:01:11 -!- diofeher has changed nick to diofeher__away.
20:05:46 Inductive classically : Prop -> Prop := 20:05:46 | neg : forall P, ~~P -> classically P 20:05:46 | foa : forall A, forall P : A -> Prop, (forall x, classically (P x)) -> classically (forall x, P x) 20:05:46 | exi : forall A, forall P : A -> Prop, classically (~(forall x, ~ P x)) -> classically (exists x, P x). 20:05:46 this is deficient :( 20:05:56 because you can't prove e.g. classically (~exists P, P /\ ~P) 20:10:29 I fucking hate strictly positive 20:10:41 hmm? 20:14:27 rule of type theory 20:14:29 -!- Tritonio_GR has quit (Ping timeout: 276 seconds). 20:14:42 it's basically a restriction on recursive types to keep the type system sound 20:14:49 but it means you can't do some stuff that's innocent... like 20:15:01 | foo : forall P, ~(classically P) -> classically (~P) 20:15:12 because classically appears as an "input not result", so to speak, of the argument 20:15:16 so it can't guarantee that you won't do 20:15:21 foo (foo (foo (foo (foo (foo (... 20:15:22 or whatever 20:18:47 -!- augur has quit (Ping timeout: 268 seconds). 20:31:00 -!- adam_d has joined. 20:45:40 -!- augur has joined. 20:48:06 -!- tombom__ has joined. 20:49:38 -!- tombom has joined. 20:51:20 -!- augur has quit (Ping timeout: 245 seconds). 20:51:28 -!- tombom_ has quit (Ping timeout: 258 seconds). 20:52:35 -!- tombom__ has quit (Ping timeout: 245 seconds). 21:06:21 -!- diofeher__away has changed nick to diofeher. 21:12:34 -!- alise has quit (Ping timeout: 258 seconds). 21:19:43 -!- oerjan has quit (Quit: leaving). 21:32:57 -!- oerjan has joined. 21:34:53 -!- alise has joined. 21:38:40 -!- Oranjer has joined. 21:41:19 -!- Alex3012 has quit (Ping timeout: 245 seconds). 22:02:18 -!- adam_d has quit (Ping timeout: 265 seconds). 22:15:00 -!- MigoMipo has quit (Quit: When two people dream the same dream, it ceases to be an illusion. KVIrc 3.4.2 Shiny http://www.kvirc.net). 22:15:58 -!- Tritonio_GR has joined. 22:16:30 -!- kar8nga has quit (Remote host closed the connection). 
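The strict-positivity restriction complained about above can be shown in a minimal pair. This is my own illustration in Lean (the same rule exists in Coq, where the rejected `foo` constructor above trips it): the type being defined may appear in a constructor's argument types only to the right of arrows.

```lean
-- Positive occurrence: `Good` appears only to the right of arrows in
-- its constructor's argument type, so this is accepted.
inductive Good : Prop where
  | intro : (Nat → Good) → Good

-- Non-positive occurrence: `Bad` sits to the left of an arrow in its
-- own constructor. Uncommenting this is rejected by the kernel,
-- because accepting it would yield proofs of both `Bad` and
-- `Bad → False`, making the logic unsound:
--
-- inductive Bad : Prop where
--   | mk : (Bad → False) → Bad
```

This is exactly the "input not result" phrasing from the log: the checker cannot rule out the infinitely regressing `foo (foo (foo ...))` term, so it forbids the shape outright, even for definitions that happen to be harmless.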
22:30:19 -!- coppro has joined. 22:36:18 -!- coppro has quit (Ping timeout: 268 seconds). 22:38:25 magic as viewed through a spying lens 22:41:30 -!- Rugxulo has joined. 22:43:47 "Ben also made a true x86 version of False (by translating the 68k code), but I lost the code (silly me)." ... well that's annoying :-/ 22:46:31 -!- diofeher has quit (Quit: ChatZilla 0.9.86 [Firefox 3.5.9/20100401213457]). 22:48:16 Rugxulo: It's easy enough to implement. Make sure to completely emulate 68k for `! 22:48:47 * alise appears to be determined, now, to create a better theorem prover than the one experts have worked on since the 1980s, Coq. 22:48:59 Well, you can't accuse me of being unambitious. 22:53:56 http://mozaika.com.au/oleg/brainf/ 22:54:04 Yes. 22:54:10 I happen to be attempting to beat that implementation. 22:54:16 In fact, I already beat it on the test programs used there. 22:54:22 I did "runtime bff4lnr < long.b" after gcc -s -O2 -fomit-frame-pointer -mtune=generic -DLNR 22:54:28 11.26 secs. 22:54:51 *but* ... bfd long.b && runtime long.com is 4.12 secs. 22:54:59 -!- Asztal has quit (Ping timeout: 265 seconds). 22:55:01 -!- Azstal has joined. 22:55:06 -!- Azstal has changed nick to Asztal. 22:55:22 Rugxulo: bfd? 22:55:25 http://home.arcor.de/partusch/html_en/bfd.html 22:55:35 if it's not written in C it doesn't really count; is it? 22:55:37 799-byte compiler (DOS .COM) 22:55:40 Rugxulo: that's a compiler, duh 22:55:42 that doesn't count 22:55:46 we're comparing interpreters 22:55:52 you have to count how long the compiler takes if you want to be fair 22:55:54 -!- calamari has joined. 22:56:04 < 1 sec. to compile 22:56:06 since bff4 does compiler-esque optimisations at run time 22:56:18 Rugxulo: shrug - bff4 works on the machines I have; bfd works on none 22:56:30 no x86 cpus? 22:56:43 no windows 22:56:57 DOSBox, DOSEMU, FreeDOS, etc. 22:57:05 that's not "working" 22:57:08 I'm not discounting your efforts! 
22:57:11 I can run Amiga binaries on this machine too 22:57:13 :-) 22:57:18 Rugxulo: never said you were, of course 22:57:19 I just wanted you to know (for speed comparison) how BFD fares (i.e. not bad) 22:57:24 just saying that bfd doesn't "really" beat bff4 :P 22:57:34 yeah compiling brainfuck wins over an interp, every time 22:57:40 Rugxulo: Have you seen esotope? 22:57:48 http://code.google.com/p/esotope-bfc/ 22:57:50 It compiles hello world to a single print statement. 22:57:50 no 22:57:56 the most advanced BF compiler out there 22:58:10 does some "proper" -- as in stuff actual compilers do -- optimisations 22:58:10 detects arithmetic, while loops, optimises away tons of loops, ... 22:58:13 I've seen the site before, but since it required Python, I didn't really test it 22:58:26 why not? 22:58:43 -!- tombom has quit (Quit: Leaving). 22:59:08 well, for one, I'm no huge Brainf*** nut (haven't practiced enough, honestly) 22:59:13 it's easy to get python on windows. 22:59:19 *Brainfuck 22:59:27 or, I guess, to be absolutely pedantic, *brainfuck 22:59:30 Python 2.4.2 is available for DOS (via DJGPP), but esotope claims to need 2.5 22:59:38 Err... so get the Windows version. 22:59:50 I don't like Windows that much, honestly 23:00:00 or Python, too bloated and weird for me ;-) 23:00:02 What OS are you using right now? 23:00:12 Windows ... I use it, but don't like it ;-)) 23:00:15 http://python.org/ftp/python/2.6.5/python-2.6.5.msi 23:00:18 Esotope your brain. 23:00:25 Anyway, less brainfuck, more theorem proving. 23:03:17 Rugxulo: I said more theorem proving. 23:03:32 ? 23:04:32 I was ordering you to talk about computerised theorem provers, along the lines of Mizar, Coq, HOL, Isabelle, etc. I did not actually expect you to follow this instruction. 23:04:49 me too dumb ;-) 23:05:08 Curry Howard Isomorphism: propositions <--> types; proofs <--> programs. 
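The Curry–Howard statement above can be made concrete in a couple of lines; this Lean sketch is my own illustration of the slogan, not from the log:

```lean
-- Curry–Howard: a proposition is a type, a proof is a program of that
-- type. Implication is the function arrow, so modus ponens is
-- literally function application:
theorem modus_ponens (P Q : Prop) : (P → Q) → P → Q :=
  fun f p => f p

-- Commutativity of ∧ is the pair-swapping program:
theorem and_swap (P Q : Prop) : P ∧ Q → Q ∧ P :=
  fun ⟨hp, hq⟩ => ⟨hq, hp⟩
```

Type-checking the program is checking the proof, which is what lets provers like Coq reduce "is this theorem proved?" to "does this term have this type?".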
23:05:22 Computerised theorem provers mostly exploit this fact to automate, and formally verify, proofs of mathematical theorems. 23:05:25 The end. 23:07:31 Any questions? 23:08:04 yes ... where's the beef? 23:08:22 and why is the Smoke Monster such a meanie? 23:08:41 Rugxulo: The beef is in the proof's computational content. 23:08:45 And because REDACTED 23:19:03 probably bronchitis 23:19:30 * alise attempts to do some category theory-as-in-real-category-theory in Coq 23:19:35 it's hard, a lot of the concepts don't really map 23:19:47 I am having trouble even stating what a morphism is :-) 23:19:59 and when your categories don't map, then you know you _really_ have trouble 23:20:18 RIMSHOT 23:20:59 I /think/ a morphism is just a function arrow, more or less 23:21:00 except 23:21:06 then what is hom(C) 23:21:46 Or are morphisms arrows? 23:22:02 morphism = arrow, they are synonyms 23:22:09 (not haskell Arrow) 23:22:20 Right, I meant haskell Arrow. 23:22:31 I'm just wondering if I should represent morphisms as http://www.haskell.org/arrows/. 23:22:43 I guess not. 23:22:44 Arrow has additional properties in addition to being morphisms 23:22:48 Right. 23:23:07 Well, it isn't "hom : ob -> ob", it's more like "hom = ob -> ob". 23:23:28 morphisms are the fundamental concepts of a category. objects are optional 23:23:40 Then what do morphisms map if not objects? 23:23:51 -!- Sgeo has joined. 23:23:53 morphisms don't map. morphisms compose 23:24:09 it's just some categories where morphisms are maps 23:24:14 Then I am really confused. 23:24:23 alise, confused? 23:24:27 Yes. 23:24:48 well that no objects version is just a reformulation and is not really intuitive 23:25:05 (basically you identify objects with their identity morphisms) 23:25:05 Unintuitive, maybe, but is it mathematically useful? 23:25:34 -!- cheater2 has quit (Ping timeout: 276 seconds). 23:26:04 Sounds nice. useful? 
23:26:08 it has fewer fundamental concepts 23:26:17 gah, Evony ads on Sourceforge, have they no shame?? 23:26:25 (although it wasn't one of the dirty ones, heh) 23:26:40 so might be easier to encode 23:26:45 oerjan: but is it easy to define objects with that and then formalise the rest conventionally? 23:27:16 as i said, objects are just identity morphisms 23:27:21 right. 23:27:27 still have to figure out how to encode a morphism :) 23:27:32 an identity morphism would be a morphism that composes with itself, giving itself 23:27:33 "malware free ... Google verified" ... yeah right 23:27:37 oh wait 23:27:59 that's not quite enough to identify it 23:28:11 hey guys 23:28:11 (could have other idempotents) 23:28:24 here's a single-axiom logical system 23:28:25 (((p -> q -> r) -> (p -> q) -> p -> r) -> (s -> t -> s) -> u) -> u 23:28:28 (it's the type of iota) 23:28:53 iota? the language? 23:28:56 yeah 23:29:02 the single combinator has that type 23:29:18 ((p -> q -> r) -> (p -> q) -> p -> r) is a bit of a strange argument to take. since it's the same as true :P 23:29:59 in fact that type is positively bizarre... 23:30:17 :: (((t -> t1 -> t2) -> (t -> t1) -> t -> t2) 23:30:17 -> (t11 -> t12 -> t11) 23:30:17 -> t21) 23:30:17 -> t21 23:30:19 maybe I translated it wrong 23:30:22 it looks like it cannot possibly use anything but its last argument 23:31:10 Prelude> let k x y = x 23:31:10 Prelude> let s x y z = x z (y z) 23:31:10 Prelude> let iota x = x s k 23:31:17 maybe iota doesn't work outside of the world of untypedness 23:31:54 yeah it doesn't work 23:31:57 iota iota isn't well typed 23:32:01 given that you need to self-apply it to get anything useful 23:32:06 and all applications involve iota iota at some point 23:32:13 apparently ^x.xKSK is ok though 23:33:53 oerjan: so what is one morphism, type-wise? 23:33:58 a function from A to A? a function from A to B? 
23:34:09 i mean i get the concepts 23:34:13 just not how they gel with type theory 23:34:18 ouch 23:35:13 what :D 23:35:36 i cannot wrap my head around the idea of embedding fully general morphisms into type theory 23:35:53 hmm... perhaps hom(C) is a list of (ob(C),ob(C)) that is the length of ob(C)^ob(C) 23:35:58 which is a function, but 23:36:03 then the object X would be (X,X) in the list 23:36:15 so 23:36:20 hom(C) is ob(C) -> ob(C) 23:36:25 which makes one morphism (ob(C), ob(C)) 23:36:29 oerjan: or is it more general in this case? 23:36:50 now mind I can't dispense with objects, since I say that ob(C) is a type 23:36:55 which I need to define hom 23:38:01 i tend to think of hom as something taking _two_ arguments, both objects 23:38:12 hom(A,B) being the set of morphisms from A to B 23:38:24 yes 23:38:33 but how is that distinct from hom(A,B) being one single function A->B? 23:38:50 a morphism is just a pairing of two objects, (A,B) and hom(A,B) pairs every object in A with one from B; so it's A->B 23:39:11 er for the category Set, the _elements_ of hom(A,B) are functions A->B 23:39:23 i'm talking about an interpretation of category theory 23:39:28 i'm attempting to implement category theory 23:39:51 oerjan: so, using Haskell, would you agree that this definition is reasonable?: 23:39:55 type Hom a b = a -> b 23:40:02 where hom(A,B) is translated into Hom a b 23:40:48 but then what of hom_C(A,B) 23:40:51 for the category Hask, sure 23:41:12 * Sgeo wonders how LiveFunge-93 would work 23:41:19 oerjan: but I'm trying to implement /every/ category 23:41:25 i'm trying to implement category theory itself 23:41:28 Display the code on its side in normal portrait mode? 23:41:41 Sgeo: or in normal mode 23:41:43 not flipped 23:41:45 who knows 23:41:56 and i have no idea whether what you are trying to do even makes sense 23:42:00 I don't want to squish the characters though 23:43:08 Are people here willing to make nice-looking Befunge-93 programs for it if I make it? 
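oerjan's point in the exchange above (hom takes *two* objects, and a morphism is an *element* of hom(A,B), not itself a function A → B) can be written down as a record. This Lean sketch is my own; it omits the identity and associativity laws and glosses over universe issues:

```lean
-- A bare-bones category: morphisms are opaque elements of Hom A B.
-- Only for the particular category Set/Hask do they happen to be
-- functions; `type Hom a b = a -> b` bakes in that special case.
structure Category where
  Ob   : Type
  Hom  : Ob → Ob → Type
  id   : (A : Ob) → Hom A A
  comp : {A B C : Ob} → Hom B C → Hom A B → Hom A C
  -- laws omitted in this sketch:
  --   comp (id B) f = f,  comp f (id A) = f,
  --   comp h (comp g f) = comp (comp h g) f
```

With this formulation, "implementing every category" means instantiating the structure, rather than fixing morphisms to be function arrows up front.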
23:43:30 -!- ze_german has quit. 23:43:44 Sgeo: maybe. 23:43:51 oerjan: well, category theory is a mathematical discipline. agreed? 23:44:07 it has abstract objects, called categories; relations between these objects, definitions about them, theorems relating to them, etc. 23:44:08 agreed? 23:45:14 oerjan: you seem hesitant :) 23:45:34 yes. i don't see that it makes sense to identify morphisms between two objects with a type 23:46:11 well a functional type 23:46:23 oerjan: but, let's go incrementally here 23:46:26 do you agree with my two assertions? 23:47:50 Sgeo, eh? I missed it, what are you making? 23:48:12 Live Wallpaper for Android 2.1 showing Befunge-93 being interpreted.. 23:48:13 Maybe 23:48:16 If I have time 23:48:18 leave me out of this 23:48:25 And don't start making a RoboZZle for Android app 23:48:35 oerjan: :P 23:48:44 YOU ARE OBLIGATED TO RESPOND 23:49:10 Sgeo, the only "pretty" Befunge program (output) is my "guesswho2.bef" ;-) 23:49:16 well, or maybe mandel.bf 23:49:23 It wouldn't show output 23:49:27 everything else is pretty banal 23:49:33 It would show Fungespace, or whatever it's called in 93 23:49:37 then what's the point? just to see it trace 23:49:38 ? 23:49:48 Why not? 23:49:58 that's fine, just a bit mind-numbing ;-) 23:50:03 Ooh! Different threads are different.. wait, that's 98 23:50:40 CPU usage might be a bit extreme for a live wallpaper :/ 23:52:47 just run at low priority 23:56:32 -!- oerjan has quit (Quit: leaving). 23:57:59 http://www.expertrating.com/jobs/Programming-jobs/Befunge-Programmer-jobs.asp