00:07:46 [ `:11111111112779092073579732177590915891200000000000x 00:07:47 b_jonas: `:11111111112779092073579732177590915891200000000000x 00:07:52 [ q:11111111112779092073579732177590915891200000000000x NB. int-e 00:07:53 b_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 7 11 11 11 11 11 11 13 13 13 13 13 13 13 13 13 17 17 19 23 23 23 00:10:15 -!- nfd9001 has quit (Read error: Connection reset by peer). 00:12:01 -!- salpynx has joined. 00:13:24 int-e: ^ that's the best if you use primes no greater than 23 00:13:34 I'm running a longer search now 00:14:00 well, as soon as I fix the bugs in my program 00:14:57 [ q:11111111111269581656547160489766631945078430800000x 00:14:57 b_jonas: 2 2 2 2 2 2 2 3 3 3 3 3 5 5 5 5 5 7 7 7 7 7 7 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 19 19 19 19 19 19 19 29 00:16:44 int-e: may I ask how good a number you've got, that is, (the best number you have minus (10**50-1)/9) rounded to two significant digits? 00:17:35 wait, what's the competition? 00:18:23 Hooloovo0: http://esolangs.org/logs/2019-08-02.html#lY 00:20:39 dammit I missed a discussion of TI-8x grayscale 00:21:20 ha, sorry 00:22:32 the display (or at least the controller) doesn't support any more than black and white, you have to flicker the pixels fast enough that you trick the eye into seeing gray 00:23:40 I too wrote a mandelbrot renderer in BASIC and it took a similarly long amount of time to render 00:23:58 I wonder if you could flash like 8 pictures in BASIC to get flickerless grayscale...
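The J verb q: used above returns the prime factorization of its argument. Since the candidates in this search are smooth by construction (every prime factor here is at most 23), even plain trial division recovers the same factor lists; a minimal Python sketch of what q: computes:

```python
def factor(n):
    """Prime factorization by trial division, smallest factor first.

    Fine here because the numbers being factored are smooth; for a
    50-digit number with a large prime factor this would be hopeless.
    """
    fs, p = [], 2
    while p * p <= n:
        while n % p == 0:
            fs.append(p)
            n //= p
        p += 1
    if n > 1:
        fs.append(n)  # leftover cofactor is prime
    return fs
```

For example, factor(252000) gives 2 2 2 2 2 3 3 5 5 5 7, matching the > 2^5*3^2*5^3*7 evaluation later in the log.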
00:24:21 (well, relatively speaking) 00:25:33 also sdcc for the z80 isn't super good 00:26:08 the only compiled language which is half-decent for TI-z80 is AXE 00:29:29 oh come on, stupid program, find a better solution 00:31:10 [ 11111111111269581656547160489766631945078430800000 - (9<.@%~_1+10x^50) 00:31:11 b_jonas: 1.58473e38 00:31:14 Hooloovo0: it wasn't flickerless 00:32:43 [ 0j_2": 11111111111269581656547160489766631945078430800000x - (9<.@%~_1+10x^50) 00:32:43 b_jonas: 1.58e38 00:33:23 ok, I think I'll leave this running for a while 00:35:59 [ q:11111111111161923559652900718659162521362304687500x 00:36:00 b_jonas: 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 11 13 13 13 17 17 17 17 17 17 23 31 59 00:36:21 [ 0j_2": 11111111111161923559652900718659162521362304687500x - (9<.@%~_1+10x^50) 00:36:22 b_jonas: 5.08e37 00:41:48 -!- douglas_ has joined. 00:42:53 -!- douglas_ has quit (Remote host closed the connection). 01:37:46 -!- FreeFull has quit. 01:54:26 -!- xkapastel has quit (Quit: Connection closed for inactivity). 01:57:31 [ q:11111111111111167179461296463398102111816406250000x 01:57:32 b_jonas: 2 2 2 2 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 11 11 11 11 13 13 13 13 13 17 19 19 23 23 23 23 23 23 23 23 53 61 73 01:57:35 -!- oerjan has joined. 01:57:55 [ 0j2": 11111111111111167179461296463398102111816406250000x - (9<.@%~_1+10x^50) NB. int-e 01:57:56 b_jonas: 56068350185352286991000705295138889.00 01:58:06 [ 0j_2": 11111111111111167179461296463398102111816406250000x - (9<.@%~_1+10x^50) 01:58:07 b_jonas: 5.61e34 02:37:11 -!- rodgort has quit (Quit: Leaving). 02:46:42 -!- rodgort has joined. 02:49:42 -!- oerjan has quit (Quit: leaving). 02:57:30 -!- rodgort has quit (Ping timeout: 272 seconds). 02:58:28 -!- rodgort has joined. 03:16:29 what kind of optimization algorithm are you using b_jonas?
03:21:48 [[Adjudicated Blind Collaborative Design Esolang Factory]] M https://esolangs.org/w/index.php?diff=65368&oldid=65364 * A * (+18) At least this is created in 2006. 04:37:59 [[Adjudicated Blind Collaborative Design Esolang Factory]] M https://esolangs.org/w/index.php?diff=65369&oldid=65368 * Salpynx * (-18) Undo revision 65368 by [[Special:Contributions/A|A]] ([[User talk:A|talk]]) Yes, but [[Category:Years]] "All languages should belong to exactly one of these categories, and other articles should not." 06:14:03 b_jonas: that's pretty good 06:14:19 b_jonas: 11111111111111111400018389711831195436675393750000 is the best I've got 06:14:58 What are these numbers? 06:15:20 103-smooth overapproximations of 11111111111111111111111111111111111111111111111111 06:17:13 What is that? 06:17:27 A four-letter word. 06:17:29 I'm looking for clues in the text and not seeing them. 06:17:49 @google "B-smooth" 06:17:50 https://en.wikipedia.org/wiki/Smooth_number 06:18:00 Aha. 06:18:19 Why would you use the word "smooth" to talk about natural numbers? Come on. 06:18:30 * kmc is a smooth operator 06:18:53 shachaf: because we also use the word "round" for natural numbers 06:19:23 i,i kmc^op 06:19:25 and round numbers tend to be smooth 06:19:31 you gotta tell me if you're a c^op 06:19:51 is it opposite day 06:19:57 > 2^5*3^2*5^3*7 06:19:59 no hth 06:20:00 252000 06:20:42 > 2^15*3^10*7^7 -- smooth, not round 06:20:47 1593487871410176 06:22:11 it's not round in base 10 because it doesn't have any 5s 06:22:13 b_jonas: oh, actually 11111111111111111215673466122483207680856045806875 is the best I have. 06:22:52 smooth buddy 06:43:32 I wonder whether an ILP solver would be good at this kind of thing. 06:44:06 I guess probably not very? 
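The target being overapproximated is the 50-digit repunit (10^50-1)/9 = 111...1, and a "B-smooth overapproximation" is a product of primes up to B that is at least the repunit and as small as possible. Neither b_jonas's nor int-e's program appears in the log; a toy branch-and-bound sketch of the same kind of search (assumes the prime list is sorted ascending):

```python
def best_smooth_at_least(target, primes):
    """Smallest product of the given primes that is >= target.

    Depth-first search over non-decreasing prime sequences; a branch
    is cut (the "bound") as soon as the partial product can no longer
    beat the best candidate found so far.
    """
    best = None

    def go(acc, i):
        nonlocal best
        if acc >= target:
            best = acc  # guaranteed to improve, by the bound below
            return
        for j in range(i, len(primes)):
            child = acc * primes[j]
            if best is not None and child >= best:
                break  # larger primes only overshoot further
            go(child, j)

    go(1, 0)
    return best
```

best_smooth_at_least(101, [2, 3, 5]) returns 108, the smallest 5-smooth number that is at least 101. The real search is the same idea with 50-digit targets, stronger pruning, and bookkeeping to make it finish in reasonable time.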
06:50:45 I looked into approximate subset sum solvers but they mostly suck 06:50:52 well, all suck 07:08:59 Girl Genius theory: the entire thing's going to be a shaggy dog story, with Agatha, Gil, and Tarvek being unable to claim their titles due to having died in Castle Heterodyne during the Si Vales Valeo procedure 07:09:39 branch and bound is tg 07:09:47 can sat solvers use a trick like that somehow 07:15:48 -!- john_metcalf has joined. 07:22:30 shachaf: Uh, they are? 07:23:00 You only have two values. You cut off branches that are definitely false. 07:23:25 Unit propagation is a combined branch & bound. 07:23:39 It's all so degenerated though that the concept hardly fits. 07:26:47 I guess... 07:26:59 Do SMT solvers use it more directly? 07:27:08 Maybe when used for optimization rather than satisfiability. 07:27:21 Maybe that's pretty far from SAT territory. 07:29:56 I don't know whether the LIA solvers prefer gomory cuts or branch&bound. 07:30:20 (LIA = linear integer arithmetic) 07:31:22 Hmm, maybe lookahead SAT solvers are a bit closer to the kind of thing I was thinking of. 07:31:40 It's not really a bound, though. 07:34:10 [ 0j_2": (9<.@%~_1+10x^50) -~ 11111111111111111400018389711831195436675393750000 07:34:11 b_jonas: 1.30e33 07:34:13 afaiui lookahead is just another heuristic for selecting promising decisions. 07:34:22 nice 07:34:34 [ 0j_2": (9<.@%~_1+10x^50) -~ 11111111111111111215673466122483207680856045806875x 07:34:35 b_jonas: 1.05e32 07:34:37 even better 07:34:43 [ q: 11111111111111111215673466122483207680856045806875x 07:34:43 b_jonas: 3 3 3 3 3 5 5 5 5 7 7 7 7 7 11 11 11 11 13 13 13 13 13 13 13 13 17 19 31 37 37 37 37 43 47 47 47 67 67 97 97 103 07:34:58 Yes, it's a different thing. 07:35:19 I'll try to run this search with different parameters on a faster machine 07:37:47 shachaf: http://esolangs.org/logs/2019-08-02.html#lY 07:38:55 Aha. 
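int-e's point that SAT solving is a degenerate branch and bound can be made concrete with a toy DPLL procedure: unit propagation is the forced part, an empty clause is the "bound" that cuts a branch, and picking a literal is the "branch". A hypothetical sketch (clauses as lists of nonzero ints, DIMACS-style):

```python
def simplify(clauses, lit):
    """Assign literal `lit` True: drop satisfied clauses, shrink the rest.
    Returns None on an empty clause (conflict), pruning the branch."""
    out = []
    for c in clauses:
        if lit in c:
            continue
        c2 = [l for l in c if l != -lit]
        if not c2:
            return None
        out.append(c2)
    return out

def dpll(clauses):
    """Tiny DPLL satisfiability check, just to illustrate the analogy."""
    while True:  # unit propagation: the forced, "bound"-like part
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break
        clauses = simplify(clauses, units[0])
        if clauses is None:
            return False
    if not clauses:
        return True
    lit = clauses[0][0]  # branch on the two values of one variable
    for choice in (lit, -lit):
        s = simplify(clauses, choice)
        if s is not None and dpll(s):
            return True
    return False
```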
07:40:57 b_jonas: http://paste.debian.net/1094920/ has some more results and runtime on a i7-6850K (3.6GHz, single-threaded, written in Haskell, exact arbitrary precision integer arithmetic) 07:42:06 int-e: I don't want to look at spoilers yet 07:42:15 b_jonas: it's only numbers and runtimes 07:43:15 (And spoils the fact that I have tried two different approaches... I don't really think that's a spoiler :) ) 07:51:08 -!- b_jonas has quit (Quit: leaving). 08:12:42 -!- Lord_of_Life has quit (Ping timeout: 244 seconds). 08:14:07 -!- Lord_of_Life has joined. 08:14:24 -!- Phantom__Hoover has joined. 08:26:35 Is "rep ret" necessary only when the ret is on an even address? 08:26:43 Also is it necessary at all nowadays or only for old AMD CPUs? 08:26:52 Or maybe I mean odd. Whichever one I mean. 08:29:11 "Continuing in the following generation of AMD CPUs, Bulldozer, we see that any advice regarding ret has disappeared from the optimization guide." 08:29:23 according to http://repzret.org/p/repzret/ 08:29:50 Probably not. 08:29:55 Obviously the AMD manuals are the authoritative source. 08:31:15 Hmm, I generally only look at the Intel manuals. I guess I should read the AMD ones too. 08:31:23 (Not that I look at the optimization guide hardly ever.) 08:31:45 when are people going to scrap x86 twh hand 08:32:00 -!- cpressey has joined. 08:32:20 maybe when Intel goes out of business. 08:33:39 I imagine Intel could do pretty well at other architectures if it came to it? 08:34:19 but why would they switch away from x86 08:35:00 I think "the world switching away from x86" is more likely than "Intel going out of business" 08:35:13 Though maybe not. Companies can be ephemeral. 08:35:23 I'll believe it when it happens. 08:35:44 The world's most popular operating system is already almost exclusively ARM. 08:38:05 any architecture with LLVM support is viable these days. 08:38:27 So the popularity of ARM is still no reason for Intel to switch away from x86. 08:38:31 Uh oh.
08:38:39 If I write a compiler should I target LLVM? 08:38:52 probably 08:39:51 Hmm, there were a few things where I wasn't sure LLVM could really do the things I want. 08:39:54 But maybe it's feasible. 08:40:10 x86 is still huge for gaming 08:41:54 Are there any standards like calling conventions or whatever for software that wants to be sure to avoid stack overflow? 08:42:04 For example, a pointer to the end of the stack that it can check. 08:43:31 guard pages? 08:44:41 Presumably programs would like to fail better than a SEGV. 08:45:10 Meh you're so hard to please. 08:45:36 For example to guarantee success before starting a computation rather than crashing in the middle. 08:45:42 This seems pretty basic. 08:45:58 Recursion is the only case where you might need a dynamic check. 08:48:59 Also, are there any clues for why the REX bits are called W R X B? 08:48:59 shachaf: You want to be able to call code from external libraries and you want to be sure to avoid stack overflow? 08:49:08 -!- wob_jonas has joined. 08:49:13 `ping 08:49:14 pong 08:49:28 cpressey: Ideally I'd like this to work across library boundaries, yes. 08:51:27 shachaf: You do seem to be asking a lot 08:53:31 shachaf: I don't know about the ret instructions specifically, but you should look them up in the optimization manuals for your target cpu at "https://software.intel.com/en-us/articles/intel-sdm" and AMD, and in Agner's optimization manuals at "http://www.agner.org/optimize/" if you care 08:54:17 cpressey: I guess my wisdom entry is correct tonight. 08:54:20 shachaf: OK, so I have this computation, and in the middle it loads a shared object and calls the symbol `foo` in it. You want to guarantee this will not overflow the stack. You want this guarantee *before starting it*. 08:54:41 That's all I mean by asking a lot 08:54:49 Oh, I see. 08:55:07 cpressey: If all your functions are non-recursive and non-indirect, this can just be in the type of foo. 
08:55:27 I guess it's a problem with shared libraries but shared libraries aren't so great in the first place. 08:55:39 If you have control over foo and access to information about it in the compiler, just track the stack size in the compiler, you don't need dynamic checks 08:55:52 If you don't have control over foo, all bets are off 08:56:01 You need something like dynamic checks if you want to support recursion. 08:56:38 OK so you have a general recursive function and you want a guarantee *before calling it* that it will terminate 08:56:38 Besides, you at least need a dynamic check at program startup or something. 08:56:49 No, I want it to be able to fail gracefully. 08:57:04 For example maybe I only want to support bounded recursion where it can decide on the bound up-front. 08:57:59 It's true that I hadn't thought carefully about dynamic libraries, they're kind of tricky because they're all indirect jumps. 08:58:32 Then your "calling convention" is to keep track of the recursion count and "fail gracefully" (however you define that) if the call would mean the count is exceeded 08:58:33 You can have a standard calling convention where dynamic calls are guaranteed 8kB or whatever of stack space, and if they want more than that they can do a dynamic check. 08:58:52 Right. There are a lot of things that are more graceful than SEGV. 
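One concrete shape for "bounded recursion where it can decide on the bound up-front": thread an explicit depth budget through the recursion and report failure as a value rather than crashing. A hypothetical sketch (the names and the tuple convention are made up for illustration):

```python
def sum_tree(node, budget):
    """Sum a nested-list tree, returning (ok, value).

    (False, None) means the depth budget ran out: the caller gets an
    error result instead of a stack overflow.
    """
    if budget == 0:
        return (False, None)
    if isinstance(node, int):
        return (True, node)
    total = 0
    for child in node:
        ok, v = sum_tree(child, budget - 1)
        if not ok:
            return (False, None)
        total += v
    return (True, total)
```

A calling convention could standardize the same pattern: a guaranteed minimum of stack per dynamic call, with an explicit check like this budget for anything deeper.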
08:59:23 shachaf: that would make it tricky to have dynamic calls which make dynamic calls 08:59:24 "why the REX bits are called W R X B" => R for "register" because it extends the field that usually specifies a register operand (though sometimes chooses between instrs), X for "index" because it extends the field that gives the (scaled) index register for memory operands, B for "base" because it may extend the field that gives the base register 08:59:25 (though may also extend the other register operand for reg-reg instructions), and W for "word" because it can determine word size between 32 and 64 bits 08:59:45 -!- relrod has quit (Ping timeout: 268 seconds). 09:00:00 of course sometimes some of those mnemonics are meaningless, because in some instructions some of the bits are ignored or must be zero 09:00:03 Taneb: Hmm, maybe. How frequent is that? 09:00:13 I don't know 09:00:15 wob_jonas: Aha. Thanks. 09:00:28 -!- relrod has joined. 09:00:30 -!- relrod has quit (Changing host). 09:00:30 -!- relrod has joined. 09:00:34 relrod: helrod 09:01:28 I think dynamic linking is mostly a bad idea for many reasons, but this one can go on the list. 09:01:47 I think dynamic linking is mostly a good idea 09:02:44 imagine having to reinstall every fricking executable on my debian whenever some bug is fixed in one of the frequently used libraries that is currently a shared library 09:03:12 you may still want to link some things statically of course 09:03:33 What if the bug fix requires an API change? 09:04:42 That's quite rare, in my experience 09:06:21 I feel like you're describing an infrequent case with a relatively small benefit, though I don't know. 09:06:28 shachaf: You really have to convince people... C/C++ users foremost... that failing more gracefully than producing a segmentation fault is actually desirable and useful.
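wob_jonas's W/R/X/B explanation can be restated as a tiny decoder. The bit positions below are the standard x86-64 REX encoding (W is bit 3, R bit 2, X bit 1, B bit 0 of a 0x40-0x4F prefix byte); the function itself is just an illustrative sketch:

```python
def decode_rex(byte):
    """Split a REX prefix byte (0x40-0x4F) into its W, R, X, B bits."""
    assert 0x40 <= byte <= 0x4F, "not a REX prefix"
    return {
        "W": (byte >> 3) & 1,  # operand size: 64-bit when set
        "R": (byte >> 2) & 1,  # extends ModRM.reg (the "register" field)
        "X": (byte >> 1) & 1,  # extends SIB.index (the "index" field)
        "B": byte & 1,         # extends ModRM.rm / SIB.base / opcode reg
    }
```

For example, the 0x48 prefix seen before most 64-bit instructions is just REX.W.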
For that, you have to figure out what that more graceful way would be, especially in the context of C (C++ has exceptions, arguably that makes it easier to do something useful.) 09:06:32 This only really matters for security bugs probably. 09:06:55 int-e: Returning an error? 09:07:08 returning an error from where? 09:07:18 The call that ran out of stack space. 09:07:25 I'm invoking a void foo() 09:07:35 If foo can fail its return type shouldn't be void. 09:07:59 I don't think you'll convince anybody that way. 09:08:00 If you "fail gracefully" remember to clean up all the resources you allocated... and hope the cleanup code doesn't also need to "fail gracefully" 09:08:16 This sort of thing seems like a basic requirement for reliable software? 09:08:28 Nobody's going to go over millions of lines of code base with void functions and change them. 09:08:36 Oh, I only mean for new programs. 09:08:41 (should write billions) 09:08:52 no, it really depends on what software it is 09:08:56 Existing software can keep using guard pages if you want, I just want my programs to be reliable. 09:09:12 in most cases, in programs I write, most errors don't have to be handled gracefully, I don't have to clean up anything, just print an error message and exit 09:09:30 then I find the very few actual errors that will happen often, and handle those 09:09:30 Well for your own programming language you can define your own ABI. 09:09:43 this is because I mostly write programs for research, not for production 09:09:45 cpressey: I mean, sure, these are problems, but they're problems you already have to solve to write reliable and resilient software. 09:09:55 int-e: I will! I'm just wondering whether this exists anywhere. 09:10:00 (Did you know that LLVM knows about GHC's calling convention?) 
09:10:00 so only I run them, and I will edit and rerun them when they hit a case that I don't handle 09:10:13 it would take four times as long if I tried to handle every possible error gracefully 09:10:23 shachaf: Erlang promotes writing reliable and resilient software by *expecting* that processes will sometimes crash 09:10:38 int-e: Here's an example of a bug that came from not having clearly defined stack bounds: https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/ 09:10:47 (I didn't up to a short while ago. I have not checked the details.) 09:10:51 and most of those errors are either detecting internal logic bugs in my program, or warning me that the input has something that I thought it didn't have and so I don't have code to read it properly 09:10:59 That seems far simpler than trying to guess all the ways they might crash and all the cases you would need to handle to make them "fail gracefully" 09:11:00 cpressey: But it doesn't expect individual function calls to crash. 09:11:20 Certainly the thing you're talking about is important for building a large resilient system. 09:11:34 But it's not going to be a fractal system where you expect every instruction to potentially fail. 09:11:34 cpressey: exactly, which is why I don't have to do cleanup for the errors in particular, because I expect they can fail in ways that I don't expect so whatever that causes I have to be able to clean up too 09:11:37 shachaf: Individual function calls can raise an exception, which, if unhandled, causes the process to crash 09:11:52 Exceptions are almost certainly a bad idea for reliable software. 09:12:04 (And probably for other software?) 09:12:16 so I store the data on the file system in such a way that I can recover from the state I can get from a crashed process or power failure 09:12:38 shachaf: only _handling_ an exception is usually a bad idea 09:12:38 shachaf: exceptions solve the API problem presented by functions that cannot return an error condition.
09:12:43 int-e: I did know that they had something though not the exact details. I was under the impression it was some small subset of the GHC calling convention. 09:12:50 raising an "exception" that's actually a fatal error that you can't catch is not a bad idea 09:12:54 it's just strange terminology 09:13:18 int-e: They don't, because if you want your software to be reliable you need to be able to reason about all the ways control flow could go. 09:13:37 So "void foo();" only makes the control flow harder to see. 09:13:46 shachaf: I really don't know details. I was surprised to find that LLVM knows anything about GHC at all. 09:13:58 shachaf: I can always walk into the server room and turn the computer off. How do you reason about that "flow control"? 09:14:06 (It does make sense. But I didn't expect it.) 09:14:15 shachaf: Maybe more to the point, you need a definition of "reliable". 09:14:50 shachaf: Arguably the proper way to handle stack overflows is to enlarge the stack. 09:15:27 int-e: Not forever! 09:15:41 And the proper way to handle OOM is to pause the program until an engineer adds more memory to the computer. 09:15:42 I mean, you could say it's true forever, but now you need to handle out-of-memory errors on every function call. 09:15:55 What if your computer is in space or something? 09:16:51 shachaf: I've heard of aerospace engineering teams being forbidden to use recursive code. 09:16:54 I suppose for embedded systems you basically disallow recursion. 09:17:15 And if you can't recurse, you don't even need a stack, really 09:17:42 Or, at least, you can statically determine how much of it you'll need. 09:18:06 shachaf: hey :P /late 09:18:13 shachaf: how do you fail gracefully in that context? 09:18:21 Which context? 09:18:26 shachaf: space crafts 09:18:34 Oh, you probably disallow recursion. 09:19:21 I guess you have non-mission-critical stuff which you can afford to process on a best effort basis. 
09:20:57 It seems to me like it's easy to turn a system that's reliable and predictable into one that isn't (e.g. by adding useful features), but it's impossible to go the other way. 09:21:50 I certainly think it's reasonable for a program to want to allocate all its memory up front so it can avoid malloc errors. 09:22:02 (Not that that's realistic on Linux.) 09:22:54 shachaf: I was serious when I said you should define "reliable". 09:23:17 It's much easier to hit an explicitly identified target. 09:23:57 That's true. 09:24:13 I don't have a comprehensive definition or anything. I guess there are things like https://en.wikipedia.org/wiki/MISRA_C 09:24:48 It seems easy enough to point out a particular way that software could fail and say that eliminating it would make it more reliable. 09:25:14 shachaf: Are you making up a new language for this, or an existing one, or slightly modifying an existing one? 09:25:59 Let's say making up a new language. 09:26:05 But also wondering about existing systems? 09:32:29 If you're making up a new language then you have a lot more freedom to design it to try to solve some of the problems at the language level. 09:32:54 Neat. Like what? 09:33:40 I'm thinking: End of stack pointer is stored somewhere and available for checking for dynamic cases. 09:33:52 To go back to what you said about bounded recursion, you could make the language only allow bounded recursion, instead of checking if a function exceeds some recursion bound at runtime 09:34:06 Function types are annotated with maximum stack usage, so if the call graph is acyclic everything can be solved statically. 09:34:35 Only allowing bounded recursion seems like a pretty strong constraint. 09:35:12 I mean, especially if it has to be statically bounded (rather than e.g. bounded by an argument). 09:35:22 So you want to allow unbounded recursion but you also want it to be reliable 09:35:53 He wants a Turing Machine... well okay... a RAM machine. 09:36:39 Ah pronouns. 09:36:42 `? 
shachaf 09:36:43 Queen Shachaf of the Dawn sprø som selleri and cosplays Nepeta Leijon on weekends. He hates bell peppers with a passion. He doesn't know when to stop asking questions. We don't like this. 09:36:55 Queen/He. Okay, mystery solved. 09:37:33 I want to allow programmers to do what they want, and make it easy to do the reasonable thing. 09:38:27 I think dynamic checks are pretty reasonable. Say you have a recursive function to search a tree or something, and you run out of stack space. You return an error. That seems fine to me? 09:38:52 `` dowg | grep Queen 09:38:55 11837:2019-06-17 `` perl -pi -e\'s/Bond\\K/, Queen Elizabeth the first/\' wisdom/p*aneb* \ 9872:2016-12-05 slwd shachaf//s/^/Queen / \ 7390:2016-04-14 le/rn victoria/Queen Victoria is the most victorious queen the world has ever known, even having won at the not dying contest. 09:39:43 what? 09:39:49 was that really me? 09:40:01 oh, I do remember that 09:40:19 it's just strange that I used \K 09:40:34 All the things I'm saying are kind of exploratory, but this is my general attitude. I don't want to make it impossible to avoid these things, I just want the mechanism to be available. 09:41:11 perl -pi -e presumably makes perl generate a position-independent executable 09:41:51 shachaf: presumably, since this is x86_64 so executables are position-independent by default 09:44:07 But perl doesn't normally generate an executable in the first place. 09:44:35 well, then it's doubly redundant 09:45:07 I don't know why I'm a queen but I guess I'll take it. 09:45:40 ask boil̈y 09:46:12 `` echo wisdom/p*aneb* 09:46:14 wisdom/people who taneb is not 09:46:28 shachaf: seems you have fungot to thank for that one 09:46:29 `? people who taneb is not 09:46:30 elliott, a rabbi, Mark Zuckerberg, James Bond, Queen Elizabeth the first. Pending approval: Shigeru Miyamoto. 09:46:54 shachaf: https://esolangs.org/logs/2016-12-05.html#lRl 09:46:56 Taneb: Which rabbi are you not?
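Returning to the earlier point that annotating function types with maximum stack usage solves the problem statically when the call graph is acyclic: the worst case is just a longest-path computation over the graph. A hypothetical sketch with made-up frame sizes:

```python
def max_stack(frame_size, calls, fn, memo=None):
    """Worst-case stack usage of `fn` over an acyclic call graph:
    its own frame plus the most expensive callee (memoized)."""
    if memo is None:
        memo = {}
    if fn not in memo:
        memo[fn] = frame_size[fn] + max(
            (max_stack(frame_size, calls, g, memo) for g in calls.get(fn, [])),
            default=0,
        )
    return memo[fn]
```

With recursion or indirect calls the graph has cycles (or unknown edges) and this breaks down, which is where the dynamic checks come in.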
09:47:35 shachaf: the one from Fiddler on the Roof 09:47:50 Shouldn't you be in bed right now 09:48:08 I never saw that. 09:48:21 But a few days ago I sang some of the songs from it, in Bodega Bay. 09:48:22 Neither have I, but I was once in a production of it 09:48:26 `quote elizabeth 09:48:27 992) I've also pretended to be Queen Elizabeth the first, but that was a desperate plea for attention 09:48:30 I played the rabbi 09:48:55 `` wc -l quotes 09:48:55 1333 quotes 09:48:57 wob_jonas: I can picture exactly where I was when I pretended to be Queen Elizabeth the first 09:49:01 `quote 1333 09:49:02 1333) `unidecode ⧸🙼 ​[U+29F8 BIG SOLIDUS] [U+1F67C VERY HEAVY SOLIDUS] it is with a very heavy solidus that i write to inform you that unicode has too many code points 09:49:38 Does `quote special case numbers? or does it just find all the quotes that mention the number 09:49:43 dunno 09:49:48 `quote 1 09:49:49 1) EgoBot just opened a chat session with me to say "bork bork bork" 09:50:02 Seems like the former. 09:50:15 ``which quote 09:50:19 ​/srv/hackeso-code/multibot_cmds/lib/limits: line 5: exec: `which: not found 09:50:24 `cbt quote 09:50:26 ​#!/bin/sh \ allquotes | if [ "$1" ]; then \ if expr "$1" + 0 >/dev/null 2>&1; then \ sed "$1q;d" \ else \ grep -P -i -- "$1" \ fi \ else shuf --random-source=/dev/urandom -n 1; fi 09:51:10 What's shuf's normal random-source? 09:51:12 Yeah, that looks like it's special casing numbers 09:51:39 `` allquotes | strace -fo tmp/OUT shuf -n 1 09:51:39 326) Hmm. I guess it's nearby GRBs that would be problematic? Sgeo, if by 'problematic' you mean 'what's that in the AAAAAAAAARRRRRGGGGHHHH'. 09:51:41 `url tmp/OUT 09:51:42 https://hack.esolangs.org/tmp/OUT 09:51:58 That looks like urandom. 
09:52:14 `1 dobg quote 09:52:18 1/1:9771:2016-11-24 sled bin/quote//s,shuf,shuf --random-source=/dev/urandom, \ 978:2012-12-09 revert \ 977:2012-12-09 cp bin/quote bin/realquote; echo -n $'#!/bin/sh\nsleep 1\nrealquote "$@"\n' > bin/quote \ 0:2012-02-16 Initïal import. 09:52:31 oer 09:52:45 hah. perhaps there was an older version of `shuf` that used /dev/random instead? 09:53:12 Taneb: I heard there are questions that cross your eyes when posed. Is that true? 09:54:31 `quote 2011 09:54:31 No output. 09:54:44 Looking at the logs from that time, oerjan was spreading false rumors about /dev/urandom. 09:54:52 shachaf: I believe so. "Can you cross your eyes?" might make me cross my eyes when posed 09:55:07 `quote 124 09:55:08 124) I love logic, especially the part where it makes no sense. 09:55:16 yes, it special-cases numbers 09:55:18 `quote 124 09:55:19 64) Note that quote number 124 is not actually true. 09:56:07 `q 124 09:56:09 124) I love logic, especially the part where it makes no sense. 09:56:10 `' 124 09:56:12 124) I love logic, especially the part where it makes no sense. 09:56:13 `" 124 09:56:14 141) comex: what? *vorpal comex: hi, tab-complete completed c to comex instead of Vorpal, dunno why \ 237) okay see in my head it went, you send from your other number smth like "i'd certainly like to see you in those pink panties again" and she's like "WHAT?!? Sgeo took a pic?!?!?! that FUCKING PIG" 09:56:34 `quotes 124 09:56:34 124) I love logic, especially the part where it makes no sense. 09:57:45 -!- atslash has joined. 09:58:41 I guess dynamic libraries calling other dynamic libraries is actually reasonably common. 09:59:15 If the whole system was built with this thing in mind, you could maybe do something fancy during relocation. 09:59:20 But that's almost certainly a bad idea. 09:59:35 Instead you should just ban dynamic libraries. 
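The cbt quote output above shows the dispatch: a numeric argument selects that quote by position (sed "$1q;d"), any other argument is a case-insensitive regex filter, and no argument picks one uniformly at random. A Python sketch of the same logic (not the actual HackEso script):

```python
import random
import re

def quote(quotes, arg=None):
    """Mimic the `quote` dispatch: number -> select by position,
    other string -> regex filter, no argument -> one at random."""
    if arg is None:
        return [random.choice(quotes)]
    if arg.isdigit():  # the special case spotted in the log
        n = int(arg)
        return [quotes[n - 1]] if 1 <= n <= len(quotes) else []
    pat = re.compile(arg, re.IGNORECASE)
    return [q for q in quotes if pat.search(q)]
```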
10:00:34 `fetch http://slbkbs.org/tmp/out.a tmp/out.a 10:00:39 http:/slbkbs.org/tmp/out.a: No such file or directory 10:00:45 `fetch tmp/out.a http://slbkbs.org/tmp/out.a 10:00:47 2019-08-08 10:00:46 URL:http://slbkbs.org/tmp/out.a [923/923] -> "tmp/out.a" [1] 10:00:54 `tmp/out.a 10:00:55 ​/srv/hackeso-code/multibot_cmds/lib/limits: line 5: /hackenv/tmp/out.a: Permission denied \ /srv/hackeso-code/multibot_cmds/lib/limits: line 5: exec: /hackenv/tmp/out.a: cannot execute: Permission denied 10:01:00 `` chmod +x tmp/out.a 10:01:02 No output. 10:01:04 `tmp/out.a 10:01:05 finally 10:01:08 finally 10:01:27 that program is so good 10:01:32 `file tmp/out.a 10:01:33 tmp/out.a: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped 10:02:27 shachaf: the system does do a lot of fancy things during relocation already. resolves strong symbols overriding weak symbols, has indirect function symbols that are resolved everywhere the first time they're called, etc 10:03:07 I want to generate a program that uses dynamic libraries, because that's the only way you can make software nowadays apparently. 10:03:17 So I just want it to do the minimum possible. 10:03:34 shachaf: you know you can mix and match stuff, link some libraries statically and some dynamically 10:03:48 Yes. But some things you have to link dynamically. 10:03:54 yep 10:04:36 On Linux I think it's pretty much only OpenGL and Xlib, and their dependencies such as libc. 10:05:20 On Windows system calls are only available via dynamic linking. 10:14:52 By the way, one reason I care about this stack usage thing is that it's relevant for implementing efficient coroutines. 10:18:41 shachaf: you probably already know this, but ais523 is the person to ask about this 10:19:03 That makes sense. 10:19:27 Except for the part where ais523 isn't here right now. 
10:24:36 In a desktop context, if a program starts using excessive amounts of stack space, what I'd like to see is the OS staying responsive, so that I can find the process and kill it. 10:25:13 Yes, that's a good OS feature. 10:25:22 It's pretty ridiculous how broken things are. 10:25:23 The program shouldn't be responsible for deciding what "excessive amounts" are, so it's an OS-domain thing. 10:27:27 cpressey: you can set a soft ulimit, in which case the program gets a signal 10:27:49 That seems like a point about memory consumption in general (of which "the stack" should only be a tiny fraction). 10:28:05 Of course stacks are just regular memory. 10:28:14 shachaf: there's a specific setrlimit for stack space 10:29:17 Sure, but you can put your stack pointer wherever you want. 10:30:07 The limit only applies to the "process stack" and not thread stacks anyway, I think. 10:38:34 -!- Phantom__Hoover has quit (Ping timeout: 272 seconds). 10:40:05 -!- john_metcalf has quit (Quit: http://corewar.co.uk). 10:42:27 salpynx: https://en.wikipedia.org/w/index.php?title=Iota_and_Jot&diff=909854762&oldid=909568636 -- I didn't know that "comprised of" was so difficult :-/ 10:46:40 I don't know what it means to compose sequences, so that seems less clear to me. 10:49:23 https://en.wikipedia.org/wiki/User:Giraffedata/comprised_of <-- that user is on a vendetta against this usage. 10:50:06 Using that word correctly always sounds wrong to me, so I just avoid it entirely 10:50:28 But the new phrasing isn't accurate either. 10:50:32 "consists" seems better. 10:51:12 It's true that "comprise" is more often used in the reverse sense. 10:52:37 * int-e rephrases to "consisting of" 10:52:45 shachaf: frankly I don't know how it works, but my guess is that it applies to the total of all stacks 10:52:56 It seems not. 10:53:01 the kernel knows which mappings are stacks because they're set to auto-grow downwards 10:53:07 That would certainly be unexpected to me. 
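The setrlimit knob wob_jonas mentions is reachable from Python's resource module; a quick way to inspect the stack limits (a sketch, Linux/Unix-specific):

```python
import resource

# Soft and hard limits for the main-thread stack, in bytes.
# RLIM_INFINITY means "unlimited".
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
print("soft:", "unlimited" if soft == resource.RLIM_INFINITY else soft)
print("hard:", "unlimited" if hard == resource.RLIM_INFINITY else hard)

# Lowering the soft limit (never above the hard limit) needs no
# privileges; exceeding it then delivers a signal to the process,
# which is the "soft ulimit" behaviour referred to above.
```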
10:53:21 What does it mean for stacks to auto-grow downward? 10:53:35 Compared to memory which is mapped and gets faulted in on demand. 10:53:41 Do you mean stack memory gets mapped on demand? 10:53:54 that if you write in a part near the bottom of the stack, its size is extended 10:53:58 but I might be completely wrong here 10:54:15 maybe that applies only to x86_32, where the address space is sparse 10:54:23 s/sparse/tight/ 10:54:30 let me see 10:54:48 If it's tight that might mean people map memory near the stack, in which case you're saying the auto-growing stops? 10:55:00 My impression was that a fixed amount like 8MB was mapped at startup and that's that. 10:55:51 shachaf: yes, but the mappings are placed in the address space in a semi-smart way so that won't happen often 10:57:10 shachaf: there's a MAP_GROWSDOWN flag of mmap for autoextending, but it's possible that it's not actually used for stacks 10:57:19 http://man7.org/linux/man-pages/man2/mmap.2.html 10:57:51 "Touching an address in the "guard" page below the mapping will cause the mapping to grow by a page. This growth can be repeated until the mapping grows to within a page of the high end of the next lower mapping, at which point touching the "guard" page will result in a SIGSEGV signal." 10:57:53 it says 10:58:08 wob_jonas: It's hard to imagine what that would be used for *besides* stacks 10:58:22 cpressey: the manpage explicitly says that it's for stacks 10:58:36 but it's possible that it's not used at all these days, 10:58:45 or only on certain old architectures 10:58:52 or old types of executables or something 10:59:10 but here's the problem: 10:59:23 ``` grep stack /proc/$$/maps 10:59:24 7fbfb9a000-7fbfbbb000 rw-p 00000000 00:00 0 [stack] 10:59:36 the flags field is 0, so it doesn't actually grow down 11:00:13 do we have a typical libc-based x86_32 executable somewhere on hackeso so we can test how that behaves?
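The grep above can be done from inside a process too; a sketch of finding the main-thread stack mapping in /proc/self/maps (Linux-specific; returns None when /proc isn't available or no [stack] line exists):

```python
def stack_mapping():
    """Return (start, end) of the main-thread [stack] mapping, or None."""
    try:
        with open("/proc/self/maps") as f:
            for line in f:
                if line.rstrip().endswith("[stack]"):
                    lo, hi = (int(x, 16) for x in line.split()[0].split("-"))
                    return lo, hi
    except OSError:
        return None
    return None
```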
11:01:40 so I'm probably wrong about this 11:03:26 "After some tests on an x84_64 Debian machine, I've found that the stack grows without any system call (according to strace)." 11:03:38 https://unix.stackexchange.com/questions/145557/how-does-stack-allocation-work-in-linux 11:04:30 sure, but does it grow the mapping, or does it just fault in MAP_NORESERVE pages? 11:05:09 ``` perl -e print(0x7fbfb9a000-0x7fbfbbb000) 11:05:09 bash: -c: line 0: syntax error near unexpected token `(' \ bash: -c: line 0: `perl -e print(0x7fbfb9a000-0x7fbfbbb000)' 11:05:16 ``` perl -e 'print(0x7fbfb9a000-0x7fbfbbb000)' 11:05:17 ​-135168 11:05:28 that's definitely not 8 megabytes 11:07:24 wob_jonas: Thinking about it, it's hard to imagine the kernel implementing MAP_GROWSDOWN in a way that doesn't involve a fault. Well, maybe on some hardware, but...? 11:07:52 I guess it's there to give the kernel the freedom to implement it one way or another, depending on hardware? 11:08:31 It seems like another bizarre way to make things unpredictable. 11:08:33 And, the flag should still be shown in /proc/'s view of it? 11:08:38 Idk 11:08:41 What if someone accidentally maps pages near the current end of the stack? 11:08:51 cpressey: of course it involves a page fault. everything involves a page fault, including allocating mapped pages that weren't used before, or bringing pages in from swap, or just the kernel doing whatever at whim. but the page fault is handled in the kernel, it never raises a signal visible to the process. 
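The mapping size computed with perl above can also be read straight off the maps line. A small Python sketch using the `[stack]` line pasted earlier:

```python
# Compute the size of a mapping from a /proc/<pid>/maps line.
# This is the [stack] line pasted above; the address range is
# end-exclusive, so size = end - start.
line = "7fbfb9a000-7fbfbbb000 rw-p 00000000 00:00 0 [stack]"
start, end = (int(part, 16) for part in line.split()[0].split("-"))
size = end - start
print(size)  # 135168 bytes = 132 KiB, i.e. "128k plus 4k"
```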
11:09:21 the process can technically tell whether a page is mapped, but it should not do that except for performance optimizations or debugging 11:09:32 faulting pages in is mostly invisible in small amounts 11:09:59 obviously it's visible when firefox fills up all my ram and my hard disk starts whirring and the whole system freezes to a halt 11:10:04 but extending the swap a little won't involve that 11:10:09 hmm that is fairly round, 128k plus 4k 11:11:11 shachaf => they won't, because the kernel and libc have heuristics and knobs in sysctl for what address range to map things at, so a single-threaded stack is mapped in places where you can't *accidentally* map something below (you can map something deliberately, but that's your problem), 11:11:41 wob_jonas: Well, trying to answer shachaf's question, "What does it mean for stacks to auto-grow downward? Compared to memory which is mapped and gets faulted in on demand." -- it does get faulted in on demand, by the kernel, transparent to the userland process 11:11:46 wob_jonas: OK, I tested it and it does indeed grow. 11:11:47 for multi-threaded it's the problem of the thread library and you may have to specify a hint for the stack space you need if you are starting LOTS of threads, but then you shouldn't start lots of threads 11:11:55 shachaf: on what architecture? 11:11:59 amd64 11:12:08 I see 11:12:31 anyway, on x86_32 this made sense because the address space was small, and some processes used lots of stack while others used lots of heap 11:12:43 I could imagine a userspace implementation of growing the stack, that the compiler handles via signals or something, but... why? 11:13:16 on x86_64 right now it's less important, because we have significantly less RAM than address space, but this may change in our lifetime 11:13:46 cpressey: yes, you can do such a stupid thing.
there is an interface for handling SIGSEGV gracefully, but it's hard to get right and good only for silly tricks 11:13:55 esoteric tricks really 11:14:33 the kernel gives the process all the info about the segfault that it knows in the sa_siginfo or whatever that's called, so the process can know where the fault is 11:14:58 you can implement user-space swapping that way, or unusual garbage collectors 11:15:25 Tbh, I hate memory mapping and signals. As abstractions. They're ugly. They're performant, so I see why they're used, but that doesn't mean I have to think they're pleasant. 11:15:43 but it's a magnitude more dangerous than just the usual cases when you try to do something nontrivial in a signal handler 11:16:06 cpressey: yes, which is why we usually don't do esoteric tricks like this unless they're really needed 11:16:58 int-e: whoa, I thought that edit was a minor phrasing maybe-improvement, but it comes with an essay and its own project? I'm going to have to read the essay and see if I agree with their point, I have no strong opinion. Correct logic relating to the subject is more important. 11:17:04 `` echo $'#include <stdio.h>\n#include <string.h>\n#include <alloca.h>\nchar buf[1024]; void print_stack() { FILE *f = fopen("/proc/self/maps", "r"); while (fgets(buf, sizeof buf, f) != 0) { if (strstr(buf, "[stack]") != 0) { printf("%s", buf); break; } } fclose(f); } int main(int argc, char **argv) { while (1) { print_stack(); alloca(1024); } return 0; }' >/tmp/f.c;gcc -o /tmp/f /tmp/f.c;/tmp/f>tmp/OUT 11:17:05 we just let the kernel guys handle swapping, whether it's swapping to disk, to compressed RAM, or (sigh) to network 11:17:07 ​/hackenv/bin/`: line 5: 63 Segmentation fault /tmp/f > tmp/OUT 11:17:11 `url tmp/OUT 11:17:11 https://hack.esolangs.org/tmp/OUT 11:17:38 ``` perl -e 'print(0x7fbf0d2000-0x7fbf8cf000)' 11:17:39 ​-8376320 11:17:58 8675309 bytes is the maximum. Figures. 11:18:07 I'll stick to writing interpreters in Haskell where I can pretend everything is just math.
Beautiful, beautiful math. Which I am bad at. 11:18:15 shachaf: I think that may depend on sysctl stuff and perhaps setrlimit 11:18:43 Nope, it's always that number. 11:18:59 cpressey: yeah, that aligns with the general good advice to not optimize prematurely 11:19:12 No it doesn't? 11:19:22 I mean, depends on what you're doing. 11:19:26 cpressey: but this is #esoteric so we sometimes talk about silly optimizations 11:20:29 shachaf: dude, just last evening I made a program to search for solutions to int-e's problem in python, even though that means that the integers take like 128 bytes of memory rather than just the 24 or 32 bytes that I would need in C++ 11:20:34 the whole thing is sluggish 11:20:39 I should rewrite it in C++ 11:20:54 but I did at least get preliminary results, and know what my inner loop is and what I would have to rewrite if I wanted to do it better 11:21:10 wob_jonas: I presume that int-e's problem is something other than "someone edited my Wikipedia edit to not use 'comprised of'" 11:21:31 Taneb: http://esolangs.org/logs/2019-08-02.html#lY 11:22:04 wob_jonas: The arguments you made about 32-bit stack usage don't work together. 11:22:29 shachaf: why? 11:22:44 If it's designed the way it is to save on address space, you might accidentally map something into that area (by using the rest of your address space). 11:23:00 shachaf: no you can't. 11:23:05 So it could certainly be an accident, in exactly the cases that it's supposed to be helping. 11:23:21 If accidental mappings into that area can't happen, why not just pre-map the whole region?
11:23:28 shachaf: as a simple model, assume that the stack is mapped near the top of the user address space, and everything else is growing from the bottom of it 11:23:58 shachaf: we don't premap because we don't know if you'll have 256 M of stack and 256 M of heap or 2M of stack and 500 M of heap in a 1G address space 11:24:16 wob_jonas: does it have to have all those numbers as prime factors, or just some subset of them? 11:24:40 Just a subset according to https://en.wikipedia.org/wiki/Smooth_number 11:24:53 shachaf: but it's possible that it wasn't because of address space, but because when this was invented, they didn't have MAP_NORESERVE yet 11:25:09 Taneb: just a subset, at least the way I interpreted it 11:25:25 if it was all those numbers, you'd just have to divide the problem by one of each first and get a smaller problem 11:26:26 Taneb: https://esolangs.org/logs/2019-08-08.html#lL is the best I got so far. I'll do a better search, but I'm busy with other things 11:26:47 Taneb: I should indeed be in bed right now. 11:28:20 or maybe they didn't have a way to not allocate the supporting structures that take up like 1/1000 or 1/500 of the memory mapped, which could be a lot on old systems if you have 100 processes with 8M stack mapping each 11:30:30 or maybe it's for some other historical reason that isn't worth changing now 11:34:47 `? oots 11:34:49 oots? ¯\(°​_o)/¯ 11:34:50 `? o 11:34:51 o is a popular comedy fantasy webcomic. It's about a group called the Order of the Stick, as they go about their adventures with minimal competence, and eventually stumble into a plan by an undead sorcerer to conquer the world, and they're out to stop him and conquer their personal problems at the same time. Hopefully not in that order. 11:43:35 tfw an argument breaks out about whether the filename-selecting regex in the config file should be written as /^.*\.(ts|tsx)$/ or as /^.*\.tsx?$/ 11:46:45 the answer to that is obvious 11:47:29 `perl -e print(rand(2)<1 ?
"it should definitely be written as /^.*\.(ts|tsx)$/" : "it should definitely be written as /^.*\.tsx?$/" 11:47:30 syntax error at -e line 1, at EOF \ Execution of -e aborted due to compilation errors. 11:47:33 `perl -e print(rand(2)<1 ? "it should definitely be written as /^.*\.(ts|tsx)$/" : "it should definitely be written as /^.*\.tsx?$/") 11:47:33 it should definitely be written as /^.*.tsx? 11:47:42 argh 11:47:48 `perl -e print(rand(2)<1 ? "it should definitely be written as /^.*\.(ts|tsx)\$/" : "it should definitely be written as /^.*\.tsx?\$/") 11:47:48 it should definitely be written as /^.*.(ts|tsx)$/ 11:48:04 the other way is unclear and hard to read and should never be committed to production code 11:50:08 -!- Melvar has quit (Quit: rebooting). 11:59:52 -!- Melvar has joined. 12:10:08 [ (q: , 0j_2": (9<.@%~_1+10x^50)&-) 11111111111269581656547160489766631945078430800000x 12:10:09 wob_jonas: |domain error 12:10:09 wob_jonas: | (q:,0j_2":(9<.@%~_1+10^50)&-)11111111111269581656547160489766631945078430800000 12:10:19 [ (0j_2": (9<.@%~_1+10x^50)&-) 11111111111269581656547160489766631945078430800000x 12:10:19 wob_jonas: _1.58e38 12:10:30 [ (q:) 11111111111269581656547160489766631945078430800000x 12:10:30 wob_jonas: 2 2 2 2 2 2 2 3 3 3 3 3 5 5 5 5 5 7 7 7 7 7 7 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 19 19 19 19 19 19 19 29 12:15:42 ^ int-e: that's the best you can get with prime factors up to 29 by the way 12:16:09 int-e: The "regular language Jot" is a subset of "all sequences of 0 and 1" according to the definition of a formal language, if we understand 'all sequences' to also include the empty string. In this case 'all sequences of 0 and 1' is the whole, and contains Jot (a subset). So 'all.seq.0.1 contains Jot' => 'all.seq.0.1 comprises Jot'. 
Flip it to 12:16:10 the passive => "Jot is comprised of all all.seq.0.1" 12:16:25 which is what you wrote originally with "Jot is the regular language comprised of all sequences of 0 and 1" 12:16:41 salpynx: I think it's exactly the set of all sequences of 0 and 1 12:17:23 Jot is a programming language, not a formal language 12:17:28 As I understood it 12:17:53 taneb: yes, that's the technicality that makes both versions equally awkward, but allows my somewhat contrived and cheeky justification to hold ;) 12:18:04 [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111194529560647830327114462838933000000000x 12:18:05 wob_jonas: 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 7 7 7 7 7 7 11 17 17 19 19 19 19 19 19 19 19 19 23 23 23 23 23 29 71 | 8.34e37 12:18:06 Unless all Jot does is accept or reject a program. In which case it's a set of strings, i.e. a formal language 12:18:23 cpressey: yes, it's a programming language 12:18:26 and in that case it's almost certainly not "any sequence of 0 or 1s" because that's not a very interesting language 12:19:02 [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111185248065004566815208736562760436940800x NB.
so far this is worse than the lucky one that I got yesterday 12:19:03 wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 11 11 11 11 17 17 17 17 17 17 17 17 17 17 19 19 19 19 23 29 29 29 29 31 47 | 7.41e37 12:19:23 I believe the set of syntactically valid jot programs is equal to the set of all sequences of 0 and 1 12:19:24 [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111161923559652900718659162521362304687500x 12:19:25 wob_jonas: 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 11 13 13 13 17 17 17 17 17 17 23 31 59 | 5.08e37 12:19:30 int-e originally wrote "Jot is the regular language", so I was using that as a starting point 12:19:47 taneb: that is correct 12:19:52 https://esolangs.org/wiki/Jot "Every combination of 0's and 1's is a syntactically valid Jot program, including the null program." 12:20:00 including the empty string 12:23:00 taneb: your point re. them being equal is key, I was using 'subset' to deliberately (and misleadingly) imply 'proper subset', but rely on the technical meaning of S ⊆ S 12:23:47 I should make this program output the factoring so that I don't have to enter this command here 12:23:56 but then, this command "proves" that I'm not cheating 12:23:59 The tone of the anti-"comprised of" justification 12:24:24 made me want to construct some kind of argument to justify it on a technicality 12:25:08 I think you can extend any programming language to have a syntax where any string of symbols is syntactically correct but results in some uninteresting semantics such as terminating immediately and producing no output 12:25:21 cpressey: in Jot it's actually interesting 12:25:58 there's an xkcd for this: Effect an Effect, #326 12:27:15 [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111140890057058176051913882460557854562500x 12:27:16 wob_jonas: 2 2 3 3 5 5 5 5 5 5 7 7 11 11 11 11 11 11 11 11 13 13 17 19 19 19 19 19 19 19 19 19 19 19 19
19 19 23 23 23 23 29 29 59 59 | 2.98e37 12:27:23 still too big 12:28:37 Saying "There's an xkcd for this" is the modern version of quoting Bible verses. 12:31:45 Taneb: my point was trying to be something about how some languages have this thing called "syntax" and others don't and you can always take "syntax" away if you like. 12:32:39 It's like a front-line triage to eliminate a class of programs we think you won't be interested in running because we haven't defined any particularly interesting meanings for them 12:34:37 cpressey: I think you're right, Jot isn't a formal or regular language, so that's a problem with the sentence. How would you phrase it to indicate that the regular language all.seq.0.1 describes the syntax of Jot, which I think is the intended meaning 12:36:47 "The syntax of Jot is comprised of the regular language comprised of..."? 12:38:15 "Any sequence of 0's and 1's is a syntactically valid Jot program" ? 12:38:51 there's a news article about this wiki editor: https://medium.com/backchannel/meet-the-ultimate-wikignome-10508842caad I think I shouldn't be trying so hard to counter their pet peeve 12:39:16 I'm deliberately trying to phrase it in a conversational, almost high-school level way 12:40:02 But the smallest change to the page as I see it now might just be "The syntax of Jot is the regular language..." 12:40:05 cpressey: that's constructive. I should go with that :) 12:41:25 is a syntax 'equal to' a formal language? 12:42:12 salpynx: Yes, I think that's fair to say. 12:43:33 ok, I wasn't sure. We are in pedantic territory here, I wanted to be sure.
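The point being debated can be pinned down mechanically: the syntax of Jot is the regular language {0,1}*, so a one-line membership check suffices. A sketch in Python (the `is_valid_jot` helper name is mine, not from the wiki):

```python
import re

# The syntax of Jot as a regular language: every string over {0,1},
# including the empty string (the null program), is syntactically valid.
JOT_SYNTAX = re.compile(r"[01]*")

def is_valid_jot(source: str) -> bool:
    return JOT_SYNTAX.fullmatch(source) is not None

assert is_valid_jot("")          # the null program is valid
assert is_valid_jot("11100")
assert not is_valid_jot("012")   # 2 is not in the alphabet
```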
12:50:03 [ try =: ":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50) 12:50:04 wob_jonas: |ok 12:50:08 [ try 11111111111122460609418029716397205124244969250000x 12:50:09 wob_jonas: 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 11 11 11 13 13 13 17 17 17 17 17 17 17 19 19 19 29 29 29 29 29 29 29 29 29 29 83 97 | 1.13e37 12:55:28 "All sequences of 0 and 1 comprise the regular language that is the syntax of Jot." {0,1}* ⊇ Jot syntax "The syntax of Jot is the regular language comprised of all sequences of 0 and 1" Jot syntax ⊆ {0,1}* 13:01:52 I'll stop now, and sleep on it before making any wiki edits. I was enjoying the counter-pedantry, not sure these really are good edits. Making the first edit and prompting the wiki user to convert it to the passive 'comprised of' version since Jot is the focus of the article would be funny. 13:02:12 All languages are a bit silly. 13:03:40 Strict natural language grammar only seems valid when the last natural speaker is dead. 13:13:27 > let x = 11111111111111111215673466122483207680856045806875; y = 10^50 `div` 9 in (fromIntegral (x - y) :: Double, fromIntegral x - fromIntegral y :: Double) 13:13:32 (1.045623550113721e32,0.0) 13:14:57 -!- howlands has joined. 13:23:35 [ try 11111111111269581656547160489766631945078430800000x 13:23:35 wob_jonas: 2 2 2 2 2 2 2 3 3 3 3 3 5 5 5 5 5 7 7 7 7 7 7 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 19 19 19 19 19 19 19 29 | 1.58e38 13:24:04 yeah, I've seen that one already 13:25:09 wob_jonas: this wasn't a new result; it was just an experiment how bad the cancellation with double precision would be :) 13:26:21 11111111111269581656547160489766631945078430800000 is the optimum for the first 11 and 12 primes. 
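The Haskell experiment just above demonstrates catastrophic cancellation: subtracting exactly and then converting keeps the ~1.05e32 difference, while converting each 50-digit value to a double first loses it entirely, because both round to the same double (the difference is far below one ulp). The same experiment in Python:

```python
x = 11111111111111111215673466122483207680856045806875
y = 10**50 // 9

exact_then_convert = float(x - y)             # subtract exactly, round once
convert_then_subtract = float(x) - float(y)   # both round to the same double

print(exact_then_convert)      # roughly 1.0456e+32
print(convert_then_subtract)   # 0.0
```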
13:26:39 int-e: sure, the one I entered isn't really new either, I just started a search with different parameters while the first one is still running (for an expected three more hours), and it found this while it's still searching the part of the search space that I've already searched fully 13:26:59 it will get into a disjoint part of the search space eventually though 13:27:13 yeah 13:27:54 But I couldn't make my approach for finding the optimum work beyond 15 primes. (I actually ran out of memory (32GB here).) 13:28:13 sure, I'm not looking for the optimum with a given number of primes now 13:28:45 the three-hour-long search that I'm running could find solutions with 103 in them in theory 13:28:48 (restricting to the first n+m primes was the "first approach" in my paste) 13:32:43 the first search should eventually reproduce the best solution that I found during the night 13:37:09 [ try 11111111111112819215968651733403643249992663040000x 13:37:10 wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 5 5 5 5 7 7 7 7 7 11 11 11 11 11 11 13 13 13 13 17 17 19 19 23 23 23 23 23 23 29 29 29 47 53 83 | 1.71e36 13:37:20 I'm still afraid that that one was a lucky fluke by the way 13:38:02 feel free to suggest a different target to check that theory :) 13:38:27 no need, I can just modify my search if I want 13:38:32 it still takes lots of time 13:38:47 and my search is still an inefficient single-threaded python3 program 13:39:34 it would probably become much faster if I rewrote it as an efficient C++ program 13:39:45 but I probably won't do that now 13:39:48 what are you using for arithmetic? 13:39:56 python's built-in integers 13:40:06 so... gmp 13:40:09 probably 13:40:23 Pretty sure that's what they use by default. 13:40:24 but note that most of the numbers are 1 or 2 or 3 words long, none longer than 3 words 13:40:34 Sure.
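As a rough illustration of the kind of search being discussed (my toy sketch, not b_jonas's actual program): find the smallest p-smooth number at least as large as a repunit target, by depth-first enumeration over prime exponents with pruning. The parameters are scaled down; the real 50-digit, primes-up-to-97 instance needs a much cleverer search:

```python
# Toy smooth-number search: among numbers whose prime factors are all
# <= 13 ("13-smooth"), find the smallest one >= a small repunit target.
TARGET = (10**12 - 1) // 9          # 111111111111, a 12-digit repunit
PRIMES = [2, 3, 5, 7, 11, 13]

def smallest_smooth_at_least(target, primes):
    best = None
    def go(i, prod):
        nonlocal best
        if prod >= target:              # overshot: record, stop multiplying
            if best is None or prod < best:
                best = prod
            return
        if i == len(primes):
            return
        go(i + 1, prod)                 # take no more copies of primes[i]
        if best is None or prod * primes[i] < best:
            go(i, prod * primes[i])     # take another factor of primes[i]
    go(0, 1)
    return best

answer = smallest_smooth_at_least(TARGET, PRIMES)
print(answer, answer - TARGET)
```

Since the smallest power of 2 above the target is itself smooth, the answer is always less than twice the target, which bounds how bad the brute force can do.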
13:40:40 and I'm multiplying numbers so that the product doesn't exceed 3 words 13:41:31 I think the arithmetic isn't the slow part 13:41:38 But you're right; this means that gmp is probably not the bottleneck here; the interpreter overhead should be significant. 13:42:02 but the numbers are larger than fit in a small int and so allocated randomly spread in the python heap 13:42:29 so I have a lot of overhead on memory throughput 13:42:41 or L3 cache throughput 13:43:16 a proper program would allocate these in a dense array, 32 or 24 or 16 bytes per number 13:43:52 (16 bytes means that I use approximate numbers, which means I need extra code to track the exact values for when I find a hit, but it'd be the most efficient) 13:44:50 [ try 11111111111111167179461296463398102111816406250000x 13:44:50 wob_jonas: 2 2 2 2 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 11 11 11 11 13 13 13 13 13 17 19 19 23 23 23 23 23 23 23 23 53 61 73 | 5.61e34 13:44:54 this is the one I found yesterday, right? 13:45:25 yes 13:45:28 yeah 13:45:32 ok, so the search reproduced it 13:46:00 I should have added code to print anything better than say 1e40, to know if this is a fluke 13:46:18 um, anything better than 1e38 rather 13:46:38 the code only prints the best found so far, so I won't know if it finds ten other 1e35 sized solutions 13:47:18 although technically I could modify the main part of the code and continue from where I stopped it, but I don't want to do that now 13:53:21 -!- salpynx has quit (Remote host closed the connection).
14:02:28 [ try 11111111111111158344848599503479968356777700860000x 14:02:28 wob_jonas: 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 13 13 13 13 17 17 17 17 17 17 17 17 17 19 19 19 19 19 19 19 19 19 19 31 89 103 | 4.72e34 14:02:49 now that one is new, and the best I've found so far 14:02:57 and, what do you know, 103 is a factor in it 14:03:17 int-e: ^ 14:04:27 [[User talk:A]] M https://esolangs.org/w/index.php?diff=65370&oldid=65322 * A * (+2810) /* Blocked */ 14:07:05 progress. 14:07:29 [[User talk:A]] M https://esolangs.org/w/index.php?diff=65371&oldid=65370 * A * (-606) /* Minimal J for Beginners */ 14:08:33 I wonder if I should try to make the memory access more predictable by really-deep-copying the large array, in the sense that I copy even the biginteger objects too (an unnatural operation on python) to have them reside mostly sequentially in memory 14:11:26 Hmm... 14:11:44 perhaps even 64-bit integers could be enough for the inner loop, and then recheck whenever I get an approximate match 14:11:57 could I implement that in python? 14:12:00 or 64-bit floats? 14:12:15 yeah, 64-bit floats could work 14:13:56 That would solve the problem of the inefficient allocation 14:14:10 of course I'd still need to keep the array of exact numbers, but those would be rarely used 14:17:10 wob_jonas: I almost hate to mention it but... I wonder how suited Julia would be for this 14:17:28 cpressey: yes, it would work too 14:17:39 I don't know how well the problem vectorizes, and it's not like vectorization is automatic 14:17:43 and C++ would work well too 14:19:41 cpressey: I don't know, you can take a stab at trying to solve this if you want 14:20:45 wob_jonas: I didn't quite catch what the precise problem is, do you have a link? 
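The 64-bit-float idea discussed above could look like this (my illustration of the plan, not the actual program; the helper names are hypothetical): keep the inner loop in doubles by comparing sums of logarithms, and re-check any near match exactly with Python bigints:

```python
import math

TARGET = (10**50 - 1) // 9
LOG_TARGET = math.log(TARGET)

def looks_close(factors, slack=1e-12):
    # Cheap double-precision filter: compare the sum of log(p) against
    # log(target). Doubles carry only ~16 significant digits, so
    # anything that passes must still be verified exactly.
    return sum(math.log(p) for p in factors) >= LOG_TARGET - slack

def exact_excess(factors):
    # Exact recheck with Python bigints: product minus target,
    # or None if the product actually fell short.
    prod = 1
    for p in factors:
        prod *= p
    return prod - TARGET if prod >= TARGET else None
```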
14:21:01 I gather it has something to do with prime factorization :) 14:21:17 http://esolangs.org/logs/2019-08-02.html#lY 14:22:57 I might try to write a more efficient inner loop 14:23:05 but I can't promise anything 14:23:35 factor: ‘11111111111111111111111111111111111111111111111111’ is too large 14:23:37 drat! 14:24:32 cpressey: 78875943472201*182521213001*25601*9091*5051*271*251*41*11 14:25:04 int-e: I tested that it didn't have all small prime factors, but didn't get a full factorization 14:25:35 * int-e used pari/gp 14:25:44 I doubt I will have much time to play with it 14:26:06 your "(10**50-1)/9" was a convincing enough nothing-up-my-sleeve number that I didn't think you'd cheat by choosing a number such that if you add a very small integer it happens to factor up totally 14:26:22 plus you already said what the best solution you had was 14:26:37 wob_jonas: I didn't intend to cheat... I wanted something where it was easy to see progress :) 14:26:43 (look for the first non-1 digit) 14:26:49 exactly 14:27:17 I don't find it easy to see progress because I can't count 15 ones by hand 14:27:39 which is why I have the computer print the difference in %e format 14:27:41 wob_jonas: well, I put the numbers in a file, one line each 14:37:07 13328592851757862349726964424185557245693157222400 14:37:25 [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,2,2,2,2,2,2,2,11,13,2,3,2,5,11,23,2] 14:37:33 [ try 13328592851757862349726964424185557245693157222400x 14:37:34 wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 3 3 5 5 7 11 11 11 13 13 17 19 23 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 | 2.22e48 14:37:51 you don't have to use all of the primes 14:38:14 int-e: I know, but I thought it would be a good place to start 14:38:36 I at least fulfilled the letter of the challenge, given my limited time to work on it :) 14:39:00 [ try 11111112137520138469338239234808374004904964760870 14:39:01 int-e: |value error: try 14:39:01 int-e: | try
1.11111e49 14:39:12 [ try 11111112137520138469338239234808374004904964760870 14:39:12 int-e: |value error: try 14:39:12 int-e: | try 1.11111e49 14:39:13 [ try 11111112137520138469338239234808374004904964760870x 14:39:14 int-e: |value error: try 14:39:14 int-e: | try 11111112137520138469338239234808374004904964760870 14:39:21 int-e: you need to load my environment first, by like 14:39:25 j-bot, cd: 14:39:25 wob_jonas, changed to wob_jonas,#esoteric 14:39:34 int-e: try j-bot, load: wob_jonas 14:39:48 j-bot, safe: , 14:39:56 j-bot, save: , 14:39:56 wob_jonas, copied ,#esoteric from wob_jonas,#esoteric 14:40:09 oh, per user state 14:40:14 int-e: or you can j-bot, cd: , 14:40:20 j-bot, load: wob_jonas 14:40:20 int-e, copied int-e,#esoteric from int-e,wob_jonas 14:40:22 int-e: yeah, it was a strange decision 14:40:30 mm 14:40:33 int-e: you can also have multiple sessions in theory 14:41:11 anyway, 11111112137520138469338239234808374004904964760870 is the best possible if all primes <= 103 need to be used. 
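The J verb `try` used throughout factors a candidate and prints its distance from the repunit target. A Python analogue for readers not running j-bot (`try_candidate` and `factorize` are my names, not from the log):

```python
TARGET = (10**50 - 1) // 9   # the repunit, (9<.@%~_1+10x^50) in the J code

def factorize(n):
    # Trial division, playing the role of J's q: -- fine here, because
    # every candidate is smooth, i.e. has only small prime factors.
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def try_candidate(n):
    return factorize(n), n - TARGET

facs, excess = try_candidate(11111111111111167179461296463398102111816406250000)
print(" ".join(map(str, facs)), "|", f"{float(excess):.2e}")
```

Run on the candidate from earlier in the log, this reproduces the factor list ending in 53 61 73 and the 5.61e34 excess shown there.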
14:41:13 int-e: sorry, apparently wrong syntax 14:41:27 j-bot, load: int-e, 14:41:27 wob_jonas, copied wob_jonas,#esoteric from int-e,#esoteric 14:41:43 j-bot, clean: 14:41:43 wob_jonas, changed to wob_jonas,#esoteric and cleared it 14:41:52 j-bot, clean: , 14:41:52 wob_jonas, changed to ,#esoteric and cleared it 14:41:59 [ try =: ":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50) 14:41:59 wob_jonas: |ok 14:42:09 (the target is 111...111/p_1*...*p_27 = 463255947, which is very feasible for brute force) 14:42:44 j-bot, pwd: 14:42:44 wob_jonas, working session is ,#esoteric 14:44:55 hmm, the three-hour long program is getting close to done 14:45:26 I won't run a longer version of that, instead I'll have to improve the program 14:48:51 [ try 11111111111115397052046616165917913561809835753472x 14:48:52 wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 11 11 11 11 17 17 17 17 17 17 19 19 19 19 19 29 41 43 43 47 47 | 4.29e36 14:49:21 not particularly good, but at least it's new 14:53:32 [[User talk:A]] M https://esolangs.org/w/index.php?diff=65372&oldid=65371 * Areallycoolusername * (+199) repo is almost complete 14:53:56 [[User talk:A]] M https://esolangs.org/w/index.php?diff=65373&oldid=65372 * Areallycoolusername * (-2) 14:54:51 okay, that was stupid 14:55:13 So I tried (10^49-1)/9... 14:55:33 ...I got 1111111111111111140001838971183119543667539375000 which looked eerily similar to 11111111111111111400018389711831195436675393750000 14:56:20 Which is nice in that it indicates a certain robustness of the approach. But other than that this should be utterly unsurprising :) 14:56:51 (lesson: 10 is smooth) 14:57:53 int-e: sure, that's what you get if your search prefers small prime factors, like I do too 14:58:25 although the solution above happens to be not divisible by 5 14:58:26 Well, mine doesn't do that, exactly. 
but many of the solutions still feature small prime factors :) 14:58:52 I'm so happy that 11111111111111111215673466122483207680856045806875 is not divisible by 2 :) 14:59:06 heh 14:59:48 (I also wouldn't be surprised if it was actually optimal... but I see no way of proving such a thing.) 15:06:15 the three-hour search finished and didn't find anything better 15:20:28 -!- wob_jonas has quit (Remote host closed the connection). 15:22:51 -!- ais523 has joined. 15:33:20 ais523: I thought of some infinite initial conditions in Conway's Game of Life. For example, you could have an infinite barber pole. It doesn't need caps on the ends, it just goes on forever. 15:33:51 you can do that sort of thing in most cellular automata, I think 15:36:03 I don't think there's anything particularly philosophically problematic about them, it's just that I don't know if anyone studies them much 15:37:51 It starts getting problematic when you start considering the infinite configuration which enumerates (and thus contains) all possible finite configurations 15:38:26 If not problematic, then at least kind of weird 15:40:57 One could probably make a philosophical objection that, for a sufficiently complex infinite form, a single tick is doing an infinite amount of computational work 15:41:25 The barber pole is simple and repeatable and you can efficiently predict how any part of it will be at any tick 15:42:06 Anyway, just thoughts I had recently 15:42:41 that reminds me of a design in the Game of Life I made a while back, which contains a rake that moves at the maximum possible speed for a moving object 15:43:17 so that it creates a constant stream of spaceships that, no matter what they do or what mess they create, will never be able to create anything that shoots down the rake producing them 15:44:02 I was wondering if it would be possible to create some sort of infinitely active pattern like that, but what typically happens is that eventually an eater evolves out of the mess and 
neatly terminates the infinite stream of spaceships 15:53:48 [[Gulp]] N https://esolangs.org/w/index.php?oldid=65374 * Areallycoolusername * (+563) Created page with "'''Gulp''' is an [[esoteric programming language]] made by ][[User: Areallycoolusername|Areallycoolusername]]. It was made for golfing, and it is [[deque]]-based == Specifics..." 15:54:06 [[Gulp]] M https://esolangs.org/w/index.php?diff=65375&oldid=65374 * Areallycoolusername * (-1) 15:54:27 by the way since it's CA time in here 15:54:31 [[Gulp]] https://esolangs.org/w/index.php?diff=65376&oldid=65375 * Areallycoolusername * (-2) 15:54:40 how much have people studied CAs on arbitrary / random graphs? 15:55:31 [[Language list]] https://esolangs.org/w/index.php?diff=65377&oldid=65207 * Areallycoolusername * (+11) /* G */ 15:55:54 [[User:Areallycoolusername]] https://esolangs.org/w/index.php?diff=65378&oldid=65181 * Areallycoolusername * (+11) /* Full List of languages I Made */ 15:57:10 -!- ais523 has quit (Remote host closed the connection). 15:58:23 -!- ais523 has joined. 15:59:16 -!- cpressey has quit (Quit: WeeChat 1.4). 16:13:32 ais523: "I was wondering if it would be possible to create some sort of infinitely active pattern" - Golly comes with a couple of patterns that are more or less like that. 16:14:10 yes, but it's hard to prove that they're like that and don't end up eventually chasing down and destroying themselves 16:14:16 True. 16:14:47 Anyway, I've been thinking about replicators in CAs. 16:15:26 There's von Neumann's CA with, what, 27 states? 16:15:48 quite a lot, yes 16:16:02 29 states. 16:16:07 So yeah, that's quite a lot. :D 16:16:08 the Game of Life has replicators but in general it seems like a really fragile system 16:16:26 like, a random block or glider somewhere can completely break a complex pattern and there's nothing you can really do about it 16:16:43 Right. 16:17:22 I've seen... three general categories of replicators, I think? 
16:17:56 Or, rather, three categories of CAs with replicators. 16:18:17 There are "naturalistic" CAs like Conway's Life, and Star Wars or whatever. 16:18:36 Simple rules, huge and complicated replicators. 16:19:22 There are von Neumann-style CAs. Complicated rules, large and complex replicators, but not quite as bad as the Conway's Life ones. 16:19:46 Then there are Langton's Loops-style CAs. Medium-complexity rules, very simple replicators. 16:22:12 The upside of the naturalistic and VN-style CAs is that you can build universal constructors in both of them. 16:22:27 I don't think Langton's Loops supports universal construction. I don't know if it can even do computation. 16:23:58 Let's see, I'm trying to remember why LL requires a "sheath". You know what I'm talking about? 16:24:20 sort of, my memory of this is pretty vague 16:24:50 Similar to WireWorld, LL has "wires" that signals can go along. But unlike in WireWorld, the wires have to be surrounded on each side by a special state. 16:25:56 [[Braincells]] https://esolangs.org/w/index.php?diff=65379&oldid=65363 * HereToAnnoy * (+3169) Hopefully the spec is finished, just need execution examples and clarification. WIP 16:26:06 Hmmmm, here's one reason. In WireWorld, there's a particular state which indicates the tail of a pulse. In LL, on the other hand, the state for the tail of a pulse is identical to the background state. 16:26:40 So the "sheath" state is necessary to regenerate the wire in the wake of a pulse. 16:26:44 I wonder why it was designed that way. 16:27:55 maybe it increases the chance of the wires acting sensibly when they collide? 16:28:55 Well, let me look up how construction works in LL. 16:33:19 -!- ais523 has quit (Quit: quit). 16:34:37 I'm reading Langton's paper in which (I think) he describes LL. 
http://www-users.york.ac.uk/~gt512/BIC/langton84.pdf 16:36:11 He writes that in order for a CA pattern to be considered properly self-replicating (as opposed to being a pattern which merely "gets replicated" by the rule), it ought to contain some "code" which is both used to direct the replication process, and copied into the daughter pattern. 16:36:32 Which is awfully similar to the way that a traditional quine works. 16:36:38 that's a tricky definition to make precise 16:36:47 it's kind of like the question of whether viruses are alive 16:36:51 they need a certain environment to reproduce 16:36:55 but so does every lifeform 16:37:41 [[User:HereToAnnoy]] M https://esolangs.org/w/index.php?diff=65380&oldid=63533 * HereToAnnoy * (+101) Added [[Braincells]] to language list 16:38:49 Right. 16:39:25 I'm also reminded of a definition that someone on the Code Golf Stack Exchange site suggested for a "proper quine". 16:40:15 Which is that the program should contain at least one element which codes some *other* element of the program. 16:41:09 "Replicating cellular automata", like 1357/1357 or whatever it is, obviously don't allow you to create a replicator which satisfies that criterion, since each element of the mother pattern codes itself and only itself. 16:42:22 -!- b_jonas has joined. 16:42:51 I should probably write the inner loop in C or C++, with 64-bit floats 16:43:00 but I don't promise that I'll do it 16:43:28 what's it in now 16:43:57 cpressey: infinite initial condition for game of life can be useful because it lets you send signals at light speed, while otherwise you can only send signals (repeatedly) at half life speed 16:44:07 s/half life/half light/ 16:44:23 so yes, people do study that 16:45:13 -!- FreeFull has joined. 
16:45:56 tswett[m]: Neumann's CA => http://esolangs.org/wiki/Von_Neumann%27s_29-state_cellular_automaton -- not that there's much info there 16:49:38 How to determine what time zone is used to display the recent changes in a MediaWiki service if not registering an account? 16:49:53 kmc: python, and I only have the bigint version, not one that does the inner loop with doubles and then checks near matches with bigints 16:53:50 zzo38: what page are you viewing on the wiki? 16:54:41 b_jonas: cython is a really easy way to rewrite an inner loop as native code 16:57:01 Somebody made a hexagonal-neighborhood loop replicator CA, which is an awfully good idea: https://www.youtube.com/watch?v=_kTMO7oEN8U 17:02:03 kmc: nah, I want to write it in C++, which I already know, and know enough to be able to figure out how to optimize it properly 17:02:11 ok 17:02:16 I won't try to learn some new tool for this 17:02:22 well, ctypes is also easy 17:02:27 but I recommend learning cython at some point 17:02:34 you'll get the basics in no time 17:02:42 but again, feel free to take a stab at the original problem too 17:03:15 you pretty much just put c type declarations on your python variables 17:03:36 Lemme see. So, in LL, a 7 signal is the signal which indicates that the arm should be extended by one cell. The sheath is the part that actually responds to the signal. You couldn't have the background state respond to the signal directly, because if you're a background cell and there's a 7 next to you, you don't know whether you're supposed to respond to it or not. 17:03:36 there's no need for anything like that here 17:14:58 b_jonas: The recent changes (but not the esolang wiki; it is a different one) 17:16:28 zzo38: so the Special:RecentChanges page in the default html view? 17:16:45 b_jonas: Yes. 
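[The search being discussed here, products of small primes as close as possible, from above, to the repunit (10^50-1)/9, can be sketched in miniature. This is a hedged illustration of one brute-force approach, not b_jonas's actual program; the function name and structure are the editor's invention.]

```python
def smallest_smooth_at_least(n, primes):
    """Smallest product of the given primes (with repetition) that is >= n.

    Exhaustive DFS over exponent vectors; only practical for small inputs,
    which is why the chat turns to rewriting the inner loop in C++/Cython.
    """
    best = [None]

    def dfs(idx, value):
        if value >= n:
            # Reached or passed the target: candidate answer.
            if best[0] is None or value < best[0]:
                best[0] = value
            return
        if idx == len(primes):
            return  # ran out of primes without reaching n on this branch
        v = value
        while True:
            dfs(idx + 1, v)   # fix primes[idx]'s exponent here, move on
            if v >= n:
                break         # higher exponents can only be worse
            v *= primes[idx]

    dfs(0, 1)
    return best[0]

smallest_smooth_at_least(101, [2, 3, 5])  # -> 108 (= 2^2 * 3^3)
```

The real search additionally wants the leading decimal digits to match the repunit, which this sketch ignores.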
17:17:28 [[Semordnilap]] https://esolangs.org/w/index.php?diff=65381&oldid=60021 * Orby * (-8) /* See also */ 17:25:50 Now I'm trying to figure out why, in Langton's Loops, two consecutive "4" signals are required in order to effect a turn, instead of just one. 17:30:09 zzo38: good question 17:31:37 [[Language list]] M https://esolangs.org/w/index.php?diff=65382&oldid=65377 * HereToAnnoy * (+17) /* B */ Added Braincells to the language list. 17:32:37 zzo38: I don't know a good answer. if api.php is enabled (it is not on Wikia), even for read only, then you can use that to query the recent changes in a different format, but that's not trivial 17:33:18 you can also use api.php to query the default timezone of the wiki, but I'm not sure if that's always the one used on recentchanges for unregistered users or something may override it 17:34:04 zzo38: you can try asking this question in a mediawiki-related channel on freenode, such as #mediawiki 17:36:48 -!- Phantom__Hoover has joined. 17:37:09 perhaps one of the SpecialPages also tells the default timezone? I dunno 17:39:23 https://www.mediawiki.org/wiki/Manual:Timezone says how to set the default timezone and that that's used in Special:RecentChanges, but doesn't say how to query 17:40:00 ah, got it 17:40:45 zzo38: view the page Special:GlobalPreferences#mw-prefsection-rendering and see what timezone it says there 17:41:01 under "Time zone:" 17:41:34 hmm no, that doesn't seem to work 17:41:53 not reliably at least 17:42:05 It says it is an invalid special page 17:45:17 zzo38: is the api.php enabled?
if so, you can try loading /w/api.php?action=query&prop=info&meta=siteinfo&format=xmlfm&siprop=general|namespaces|namespacealiases|interwikimap|specialpagealiases|magicwords and see what it says in the timezone attribute of //general , but I'm not convinced that that's always right because I think there's multiple timezone settings 17:46:20 but if the api.php is enabled, then it's possible to query the recent changes with it 17:46:31 and you can compare the date there with the date in the html view 17:50:56 zzo38: look at /w/api.php?action=query&generator=recentchanges&list=recentchanges and compare its timestamps with the one in /wiki/Special:RecentChanges 17:51:32 and no, this won't work on wikia, or some other wikis where api.php is not enabled 17:53:14 OK, that works though on what I am trying to access. 17:54:34 good 17:55:12 https://www.mediawiki.org/wiki/API:Main_page has the docs for api.php in case you want to go more completely with that, eg. get the Recent Changes from only there rather than just eyeball the timezone 18:01:39 -!- MDude has quit (Ping timeout: 248 seconds). 18:06:06 [[Fit]] M https://esolangs.org/w/index.php?diff=65383&oldid=62321 * HereToAnnoy * (+743) Reduces from Boolfuck --> turing complete 18:13:20 [[Fit]] M https://esolangs.org/w/index.php?diff=65384&oldid=65383 * HereToAnnoy * (+1) Fixed typo : "(-v)+" ---> "(--v)+" 18:36:11 [[A1]] https://esolangs.org/w/index.php?diff=65385&oldid=59728 * Orby * (-5) /* See Also */ 18:48:35 yo 18:54:01 o/ 18:54:50 I wonder if I could come up with a cellular automaton that's kind of "in between" Codd's CA and Langton's Loops. 18:55:48 LL only has two commands: extend forwards and extend left. That's great as long as little square loopy replicators are the only thing you ever want to make. 18:57:04 Codd's CA has lots of commands that do lots of things, but a replicator in that CA is very complicated. 18:59:46 what's this "Codd's CA"? 
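[The api.php queries above can be scripted; a small sketch assuming only the standard MediaWiki siteinfo response shape. The `sample` dict is an abridged, illustrative response, not data fetched from any real wiki.]

```python
import json
from urllib.parse import urlencode


def siteinfo_url(api_base):
    """Build the api.php query for the wiki's general site info, which
    includes the default timezone used on Special:RecentChanges for
    unregistered users (modulo the caveats discussed above)."""
    params = {"action": "query", "meta": "siteinfo",
              "siprop": "general", "format": "json"}
    return api_base + "?" + urlencode(params)


def default_timezone(siteinfo):
    """Extract the timezone from a decoded siteinfo JSON response."""
    return siteinfo["query"]["general"]["timezone"]


# Abridged example of the response shape (illustrative only):
sample = json.loads('{"query": {"general": {"timezone": "UTC", "timeoffset": 0}}}')
```

Fetching `siteinfo_url("https://esolangs.org/w/api.php")` with any HTTP client and passing the decoded JSON to `default_timezone` gives the wiki-wide setting; as noted above, this fails on wikis where api.php is disabled.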
19:04:06 https://en.wikipedia.org/wiki/Codd%27s_cellular_automaton 19:05:02 I'm trying to think how LL might be extended to permit, say, a loop with a kink in it. 19:06:02 In LL, replication essentially consists of just executing the program four times. It does exactly the same thing the first three times (extend for a while and then turn left), and something different the last time (extend for a while, then collide, causing various interesting stuff to happen). 19:06:11 tswett[m]: wait, isn't that the same as Langton's loops? 19:06:28 they both have eight states on a square grid and make squares 19:06:43 No, Codd's CA came first. Note that the pictured loop doesn't replicate; it merely extends an arm forever. 19:07:08 hmm 19:07:25 If you tried to make "a Langton's loop" in Codd's CA, you'd find that the program to generate one side of the loop is too long to fit inside of the loop. 19:07:56 ok 19:07:59 Langton takes advantage of the fact that producing one cell of the loop requires a 3-cell instruction, but the program is executed 4 times, and 4 > 3. 19:08:00 there's a lot of loop rules in golly's sample directory 19:09:32 there's also a couple of similar-to-codd rules like devore 19:10:07 Yeah, the Devore rule is pretty much a strict improvement of Codd's rule. It's better in every way and lets you build a much, much smaller replicator. 19:11:40 I'm pondering this "loop with a kink in it" idea. You could do that with something very similar to a Langton's loop, if only you could somehow make it so that certain parts of the program are only executed some of the time. 19:14:48 So now I'm just thinking about how to accomplish that. 19:19:23 You'd want some way to store a finite state, and suppress some of the program some of the time depending on the state. 19:20:50 Hmmmmmmmm. I like the way my thoughts are going. :D 19:24:36 daisy daisy... ah no, that was HAL's mind going.
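[The idea above, a finite state that suppresses some of the program on some passes, can be modelled abstractly. This toy Python sketch is not an actual CA rule; it only shows how gating instructions per pass would let the fourth side of a loop differ from the first three.]

```python
def run_arm_program(program, passes):
    """Toy model of a construction-arm filter: the loop's program is
    replayed `passes` times, and each instruction carries the set of
    pass numbers on which the filter lets it through. A plain Langton's
    loop is the degenerate case where every instruction fires on every
    pass."""
    emitted = []
    for k in range(passes):          # the finite state: which pass we're on
        for instr, active_on in program:
            if k in active_on:       # suppress instruction on other passes
                emitted.append(instr)
    return emitted


# Three plain sides, then a side with an extra turn ("kink") on the last pass:
program = [("extend", {0, 1, 2, 3}), ("kink-left", {3})]
run_arm_program(program, 4)
# -> ['extend', 'extend', 'extend', 'extend', 'kink-left']
```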
19:26:47 The mother loop can be totally passive, and merely send the program out over and over again. The construction arm can have a part on it that's a state machine and filter. 19:31:37 [[Post Dominos]] https://esolangs.org/w/index.php?diff=65386&oldid=60349 * Orby * (-5) /* See also */ 19:38:49 we are all merely codons within the mother loop 19:49:48 -!- lldd_ has joined. 20:10:45 -!- Lord_of_Life_ has joined. 20:14:07 -!- Lord_of_Life has quit (Ping timeout: 246 seconds). 20:14:08 -!- Lord_of_Life_ has changed nick to Lord_of_Life. 20:15:54 I think I've figured it out. I can do almost everything with 12 states. 20:21:18 [[Minaac]] https://esolangs.org/w/index.php?diff=65387&oldid=59930 * TheJebForge * (-120) /* Minaac */ 20:23:22 -!- lldd_ has quit (Quit: Leaving). 20:27:32 tswett[m]: implement it in GPU :-) 20:27:49 I'm gonna implement it in Golly. :D 20:43:59 ok 20:49:47 I have no idea how fractran works 21:07:42 a positive natural is a multiset of primes hth 21:25:25 do you mean like 11111111111115397052046616165917913561809835753472 ? is that a multiset too? 21:34:20 `factor 11111111111115397052046616165917913561809835753472 21:34:21 factor: ‘11111111111115397052046616165917913561809835753472’ is too large 21:34:27 Apparently it's too large to be a multiset. 21:36:37 I see 21:37:41 learn BLJ is a move that lets you solve NP-complete problems with no stars and just one key. 21:55:49 -!- b_jonas has quit (Remote host closed the connection). 22:35:40 Retroforth's case statement causes the function that's using it to exit if the condition is met 22:39:17 so, know anything interesting about cellular automata on random graphs? 22:39:30 seems like you could model some social behaviors that way 22:40:46 -!- xkapastel has joined. 22:52:07 -!- Phantom__Hoover has quit (Ping timeout: 245 seconds). 23:02:03 It's the brain. 23:02:20 kmc: Golly. I've never even thought about cellular automata on non-planar graphs. 23:06:01 -!- FreeFull has quit. 
23:19:55 well I guess you're about ready then! 23:22:54 i'm ready for fluffy cat whiskers 23:22:57 is that on the menu 23:31:24 maybe 23:42:20 I successfully made a loop that extends itself. Woo. 23:43:35 int i = 0; while(1) { this_block.append("print(%d);", i); i++; } 23:44:04 At some point I should make a new implementation of Braintrust. Except the best language for that is probably either Common Lisp or maybe some ... assembly like thing? And I have no interest in Common Lisp these days 23:44:25 Why not do it in ALGOL 68? 23:44:34 I think ALGOL 68 is quite an interesting language. 23:44:52 It has features not present in many or any modern languages. 23:45:09 And certainly in languages that existed in 1968. 23:45:13 Does ALGOL 68 have functionality to preserve the current state as an executable that will run another function when started? 23:45:42 I imagine you could dump memory to a file and load it back up later. 23:46:06 https://ccl.clozure.com/manual/chapter4.9.html 23:46:48 Or maybe I could just... directly copy+modify the current executable, if there's specific data in the executable in a predictable location 23:47:05 That seems like a reasonable approach. 23:47:12 `tmp/out.a 23:47:13 finally 23:47:26 `` xxd tmp/out.a > tmp/out.a.xxd 23:47:26 /hackenv/bin/`: line 5: xxd: command not found 23:47:58 `` hd tmp/out.a > tmp/out.a.hd 23:47:58 No output. 23:48:01 `url tmp/out.a.hd 23:48:04 https://hack.esolangs.org/tmp/out.a.hd 23:48:04 I'm wondering if that's sort of like cheating, to compile into an interpreter and call it a "compiler" 23:49:24 [[User talk:A]] M https://esolangs.org/w/index.php?diff=65388&oldid=65373 * A * (+287) /* Concern */ 23:57:03 Sgeo_: that's one of the futamura projections. a compiler is a curried interpreter 23:57:10 you provide the program and at some point later you provide the program's input 23:57:21 so the question is just how much optimization takes place at the earlier point
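[The "a compiler is a curried interpreter" remark can be made concrete in a few lines. This is a deliberately tiny sketch of the first Futamura projection: fixing the interpreter's program argument yields a "compiled" function of the input alone, with no specialization-time optimization at all, which is exactly the question raised at the end. The mini-language is invented for illustration.]

```python
from functools import partial


def interpret(program, x):
    """A toy interpreter: program is a list of (op, arg) pairs applied to x."""
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
        else:
            raise ValueError(f"unknown op {op!r}")
    return x


# First Futamura projection, in miniature: currying the interpreter over
# the program turns it into a function of the program's input alone.
double_then_inc = partial(interpret, [("mul", 2), ("add", 1)])
double_then_inc(10)  # -> 21
```

A real partial evaluator would additionally fold the dispatch on `op` away at specialization time, producing genuinely compiled code rather than a closure over the interpreter.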