00:07:46 <b_jonas> [ `:11111111112779092073579732177590915891200000000000x
00:07:47 <j-bot> b_jonas: `:11111111112779092073579732177590915891200000000000x
00:07:52 <b_jonas> [ q:11111111112779092073579732177590915891200000000000x NB. int-e
00:07:53 <j-bot> b_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 7 11 11 11 11 11 11 13 13 13 13 13 13 13 13 13 17 17 19 23 23 23
00:10:15 -!- nfd9001 has quit (Read error: Connection reset by peer).
00:12:01 -!- salpynx has joined.
00:13:24 <b_jonas> int-e: ^ that's the best if you use primes no greater than 23
00:13:34 <b_jonas> I'm running a longer search now
00:14:00 <b_jonas> well, as soon as I fix the bugs in my program
00:14:57 <b_jonas> [ q:11111111111269581656547160489766631945078430800000x
00:14:57 <j-bot> b_jonas: 2 2 2 2 2 2 2 3 3 3 3 3 5 5 5 5 5 7 7 7 7 7 7 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 19 19 19 19 19 19 19 29
00:16:44 <b_jonas> int-e: may I ask how good a number you've got, that is, (the best number you have minus (10**50-1)/9) rounded to two significant digits?
00:18:23 <b_jonas> Hooloovo0: http://esolangs.org/logs/2019-08-02.html#lY
00:20:39 <Hooloovo0> dammit I missed a discussion of TI-8x grayscale
00:22:32 <Hooloovo0> the display (or at least the controller) doesn't support any more than black and white, you have to flicker the pixels fast enough that you trick the eye into seeing gray
00:23:40 <Hooloovo0> I too wrote a mandelbrot renderer in BASIC and it took a similarly long amount of time to render
00:23:58 <Hooloovo0> I wonder if you could flash like 8 pictures in BASIC to get flickerless grayscale...
00:25:33 <Hooloovo0> also sdcc for the z80 isn't super good
00:26:08 <Hooloovo0> the only compiled language which is half-decent for TI-z80 is AXE
00:29:29 <b_jonas> oh come on, stupid program, find a better solution
00:31:10 <b_jonas> [ 11111111111269581656547160489766631945078430800000 - (9<.@%~_1+10x^50)
00:31:11 <j-bot> b_jonas: 1.58473e38
00:31:14 <kmc> Hooloovo0: it wasn't flickerless
00:32:43 <b_jonas> [ 0j_2": 11111111111269581656547160489766631945078430800000x - (9<.@%~_1+10x^50)
00:33:23 <b_jonas> ok, I think I'll leave this running for a while
00:35:59 <b_jonas> [ q:11111111111161923559652900718659162521362304687500x
00:36:00 <j-bot> b_jonas: 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 11 13 13 13 17 17 17 17 17 17 23 31 59
00:36:21 <b_jonas> [ 0j_2": 11111111111161923559652900718659162521362304687500x - (9<.@%~_1+10x^50)
00:41:48 -!- douglas_ has joined.
00:42:53 -!- douglas_ has quit (Remote host closed the connection).
01:37:46 -!- FreeFull has quit.
01:54:26 -!- xkapastel has quit (Quit: Connection closed for inactivity).
01:57:31 <b_jonas> [ q:11111111111111167179461296463398102111816406250000x
01:57:32 <j-bot> b_jonas: 2 2 2 2 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 11 11 11 11 13 13 13 13 13 17 19 19 23 23 23 23 23 23 23 23 53 61 73
01:57:35 -!- oerjan has joined.
01:57:55 <b_jonas> [ 0j2": 11111111111111167179461296463398102111816406250000x - (9<.@%~_1+10x^50) NB. int-e
01:57:56 <j-bot> b_jonas: 56068350185352286991000705295138889.00
01:58:06 <b_jonas> [ 0j_2": 11111111111111167179461296463398102111816406250000x - (9<.@%~_1+10x^50)
02:37:11 -!- rodgort has quit (Quit: Leaving).
02:46:42 -!- rodgort has joined.
02:49:42 -!- oerjan has quit (Quit: leaving).
02:57:30 -!- rodgort has quit (Ping timeout: 272 seconds).
02:58:28 -!- rodgort has joined.
03:16:29 <Hooloovo0> what kind of optimization algorithm are you using b_jonas?
03:21:48 <esowiki> [[Adjudicated Blind Collaborative Design Esolang Factory]] M https://esolangs.org/w/index.php?diff=65368&oldid=65364 * A * (+18) At least this is created in 2006.
04:37:59 <esowiki> [[Adjudicated Blind Collaborative Design Esolang Factory]] M https://esolangs.org/w/index.php?diff=65369&oldid=65368 * Salpynx * (-18) Undo revision 65368 by [[Special:Contributions/A|A]] ([[User talk:A|talk]]) Yes, but [[Category:Years]] "All languages should belong to exactly one of these categories, and other articles should not."
06:14:03 <int-e> b_jonas: that's pretty good
06:14:19 <int-e> b_jonas: 11111111111111111400018389711831195436675393750000 is the best I've got
06:15:20 <int-e> 103-smooth overapproximations of 11111111111111111111111111111111111111111111111111
06:17:27 <int-e> A four-letter word.
06:17:29 <shachaf> I'm looking for clues in the text and not seeing them.
06:17:49 <int-e> @google "B-smooth"
06:17:50 <lambdabot> https://en.wikipedia.org/wiki/Smooth_number
06:18:19 <shachaf> Why would you use the word "smooth" to talk about natural numbers? Come on.
06:18:30 * kmc is a smooth operator
06:18:53 <int-e> shachaf: because we also use the word "round" for natural numbers
06:19:25 <int-e> and round numbers tend to be smooth
06:19:31 <shachaf> you gotta tell me if you're a c^op
06:19:51 <kmc> is it opposite day
06:20:42 <int-e> > 2^15*3^10*7^7 -- smooth, not round
06:22:11 <Hooloovo0> it's not round in base 10 because it doesn't have any 5s
06:22:13 <int-e> b_jonas: oh, actually 11111111111111111215673466122483207680856045806875 is the best I have.
06:22:52 <kmc> smooth buddy
06:43:32 <shachaf> I wonder whether an ILP solver would be good at this kind of thing.
06:44:06 <shachaf> I guess probably not very?
06:50:45 <Hooloovo0> I looked into approximate subset sum solvers but they mostly suck
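The search being discussed is for B-smooth overapproximations of the 50-digit repunit. A rough Python sketch (not b_jonas's or int-e's actual program; the helper names are made up) can at least verify a candidate by trial division, using int-e's second reported best:

```python
# Sanity-check a claimed B-smooth overapproximation of the 50-digit repunit
# R = (10**50 - 1) // 9 by trial-dividing out all primes up to B.

def primes_up_to(b):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (b + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(b**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, ok in enumerate(sieve) if ok]

def smooth_part(n, primes):
    """Divide out every prime in `primes`; returns 1 iff n is smooth over them."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n

R = (10**50 - 1) // 9
cand = 11111111111111111215673466122483207680856045806875

assert cand >= R                                   # it is an overapproximation
assert smooth_part(cand, primes_up_to(103)) == 1   # and it is 103-smooth
print(cand - R)                                    # the quantity being minimized
```

This only checks candidates; actually finding good ones is the hard part (the channel floats branch-and-bound, ILP, and approximate subset-sum over prime logarithms as possible approaches).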
07:08:59 <Taneb> Girl Genius theory: the entire thing's going to be a shaggy dog story, with Agatha, Gil, and Tarvek being unable to claim their titles due to having died in Castle Heterodyne during the Si Vales Valeo procedure
07:09:47 <shachaf> can sat solvers use a trick like that somehow
07:15:48 -!- john_metcalf has joined.
07:22:30 <int-e> shachaf: Uh, they are?
07:23:00 <int-e> You only have two values. You cut off branches that are definitely false.
07:23:25 <int-e> Unit propagation is a combined branch & bound.
07:23:39 <int-e> It's all so degenerate though that the concept hardly fits.
07:26:59 <shachaf> Do SMT solvers use it more directly?
07:27:08 <shachaf> Maybe when used for optimization rather than satisfiability.
07:27:21 <shachaf> Maybe that's pretty far from SAT territory.
07:29:56 <int-e> I don't know whether the LIA solvers prefer gomory cuts or branch&bound.
07:30:20 <int-e> (LIA = linear integer arithmetic)
07:31:22 <shachaf> Hmm, maybe lookahead SAT solvers are a bit closer to the kind of thing I was thinking of.
07:31:40 <shachaf> It's not really a bound, though.
07:34:10 <b_jonas> [ 0j_2": (9<.@%~_1+10x^50) -~ 11111111111111111400018389711831195436675393750000
07:34:13 <int-e> afaiui lookahead is just another heuristic for selecting promising decisions.
07:34:34 <b_jonas> [ 0j_2": (9<.@%~_1+10x^50) -~ 11111111111111111215673466122483207680856045806875x
07:34:43 <b_jonas> [ q: 11111111111111111215673466122483207680856045806875x
07:34:43 <j-bot> b_jonas: 3 3 3 3 3 5 5 5 5 7 7 7 7 7 11 11 11 11 13 13 13 13 13 13 13 13 17 19 31 37 37 37 37 43 47 47 47 67 67 97 97 103
07:34:58 <shachaf> Yes, it's a different thing.
07:35:19 <b_jonas> I'll try to run this search with different parameters on a faster machine
07:37:47 <b_jonas> shachaf: http://esolangs.org/logs/2019-08-02.html#lY
07:40:57 <int-e> b_jonas: http://paste.debian.net/1094920/ has some more results and runtime on a i7-6850K (3.6GHz, single-threaded, written in Haskell, exact arbitrary precision integer arithmetic)
07:42:06 <b_jonas> int-e: I don't want to look at spoilers yet
07:42:15 <int-e> b_jonas: it's only numbers and runtimes
07:43:15 <int-e> (And spoils the fact that I have tried two different approaches... I don't really think that's a spoiler :) )
07:51:08 -!- b_jonas has quit (Quit: leaving).
08:12:42 -!- Lord_of_Life has quit (Ping timeout: 244 seconds).
08:14:07 -!- Lord_of_Life has joined.
08:14:24 -!- Phantom__Hoover has joined.
08:26:35 <shachaf> Is "rep ret" necessary only when the ret is on an even address?
08:26:43 <shachaf> Also is it necessary at all nowadays or only for old AMD CPUs?
08:26:52 <shachaf> Or maybe I mean odd. Whichever one I mean.
08:29:11 <int-e> "Continuing in the following generation of AMD CPUs, Bulldozer, we see that any advice regarding ret has disappeared from the optimization guide."
08:29:23 <int-e> according to http://repzret.org/p/repzret/
08:29:55 <int-e> Obviously the AMD manuals are the authoritative source.
08:31:15 <shachaf> Hmm, I generally only look at the Intel manuals. I guess I should read the AMD ones too.
08:31:23 <shachaf> (Not that I ever look at the optimization guide much.)
08:31:45 <shachaf> when are people going to scrap x86 twh hand
08:32:00 -!- cpressey has joined.
08:32:20 <int-e> maybe when Intel goes out of business.
08:33:39 <shachaf> I imagine Intel could do pretty well at other architectures if it came to it?
08:34:19 <int-e> but why would they switch away from x86
08:35:00 <shachaf> I think "the world switching away from x86" is more likely than "Intel going out of business"
08:35:13 <shachaf> Though maybe not. Companies can be ephemeral.
08:35:23 <int-e> I'll believe it when it happens.
08:35:44 <shachaf> The world's most popular operating system is already almost exclusively ARM.
08:38:05 <int-e> any architecture with LLVM support is viable these days.
08:38:27 <int-e> So the popularity of ARM is still no reason for Intel to switch away from x86.
08:38:39 <shachaf> If I write a compiler should I target LLVM?
08:39:51 <shachaf> Hmm, there were a few things where I wasn't sure LLVM could really do the things I want.
08:40:10 <myname> x86 is still huge for gaming
08:41:54 <shachaf> Are there any standards like calling conventions or whatever for software that wants to be sure to avoid stack overflow?
08:42:04 <shachaf> For example, a pointer to the end of the stack that it can check.
08:44:41 <shachaf> Presumably programs would like to fail better than a SEGV.
08:45:10 <int-e> Meh you're so hard to please.
08:45:36 <shachaf> For example to guarantee success before starting a computation rather than crashing in the middle.
08:45:58 <shachaf> Recursion is the only case where you might need a dynamic check.
08:48:59 <shachaf> Also, are there any clues for why the REX bits are called W R X B?
08:48:59 <cpressey> shachaf: You want to be able to call code from external libraries and you want to be sure to avoid stack overflow?
08:49:08 -!- wob_jonas has joined.
08:49:28 <shachaf> cpressey: Ideally I'd like this to work across library boundaries, yes.
08:51:27 <cpressey> shachaf: You do seem to be asking a lot
08:53:31 <wob_jonas> shachaf: I don't know about the ret instructions specifically, but you should look them up in the optimization manuals for your target cpu at "https://software.intel.com/en-us/articles/intel-sdm" and AMD, and in Agner's optimization manuals at "http://www.agner.org/optimize/" if you care
08:54:17 <shachaf> cpressey: I guess my wisdom entry is correct tonight.
08:54:20 <cpressey> shachaf: OK, so I have this computation, and in the middle it loads a shared object and calls the symbol `foo` in it. You want to guarantee this will not overflow the stack. You want this guarantee *before starting it*.
08:54:41 <cpressey> That's all I mean by asking a lot
08:55:07 <shachaf> cpressey: If all your functions are non-recursive and non-indirect, this can just be in the type of foo.
08:55:27 <shachaf> I guess it's a problem with shared libraries but shared libraries aren't so great in the first place.
08:55:39 <cpressey> If you have control over foo and access to information about it in the compiler, just track the stack size in the compiler, you don't need dynamic checks
08:55:52 <cpressey> If you don't have control over foo, all bets are off
08:56:01 <shachaf> You need something like dynamic checks if you want to support recursion.
08:56:38 <cpressey> OK so you have a general recursive function and you want a guarantee *before calling it* that it will terminate
08:56:38 <shachaf> Besides, you at least need a dynamic check at program startup or something.
08:56:49 <shachaf> No, I want it to be able to fail gracefully.
08:57:04 <shachaf> For example maybe I only want to support bounded recursion where it can decide on the bound up-front.
08:57:59 <shachaf> It's true that I hadn't thought carefully about dynamic libraries, they're kind of tricky because they're all indirect jumps.
08:58:32 <cpressey> Then your "calling convention" is to keep track of the recursion count and "fail gracefully" (however you define that) if the call would mean the count is exceeded
08:58:33 <shachaf> You can have a standard calling convention where dynamic calls are guaranteed 8kB or whatever of stack space, and if they want more than that they can do a dynamic check.
08:58:52 <shachaf> Right. There are a lot of things that are more graceful than SEGV.
08:59:23 <Taneb> shachaf: that would make it tricky to have dynamic calls which make dynamic calls
08:59:24 <wob_jonas> "why the REX bits are called W R X B" => R for "register" because it extends the field that usually specifies a register operand (though sometimes chooses between instrs), X for "index" because it extends the field that gives the (scaled) index register for memory operands, B for "base" because it may extend the field that gives the base register
08:59:25 <wob_jonas> (though may also extend the other register operand for reg-reg instructions), and W for "word" because it can determine word size between 32 and 64 bits
08:59:45 -!- relrod has quit (Ping timeout: 268 seconds).
09:00:00 <wob_jonas> of course sometimes some of those mnemonics are meaningless, because in some instructions some of the bits are ignored or must be zero
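The layout wob_jonas describes can be shown concretely: a REX prefix is a single byte 0100WRXB, so the four bits fall out with shifts and masks. A small illustrative decoder (the dict keys are just the conventional mnemonics):

```python
# Decode an x86-64 REX prefix byte, layout 0100WRXB.
def decode_rex(byte):
    assert byte & 0xF0 == 0x40, "not a REX prefix"
    return {
        "W": (byte >> 3) & 1,  # 64-bit operand size when set
        "R": (byte >> 2) & 1,  # extends ModRM.reg
        "X": (byte >> 1) & 1,  # extends SIB.index
        "B": byte & 1,         # extends ModRM.rm / SIB.base
    }

# REX.W alone is 0x48, the prefix seen in common "48 89 ..." mov encodings.
print(decode_rex(0x48))
```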
09:00:03 <shachaf> Taneb: Hmm, maybe. How frequent is that?
09:00:28 -!- relrod has joined.
09:00:30 -!- relrod has quit (Changing host).
09:00:30 -!- relrod has joined.
09:01:28 <shachaf> I think dynamic linking is mostly a bad idea for many reasons, but this one can go on the list.
09:01:47 <wob_jonas> I think dynamic linking is mostly a good idea
09:02:44 <wob_jonas> imagine having to reinstall every fricking executable on my debian whenever some bug is fixed in one of the frequently used libraries that is currently a shared library
09:03:12 <wob_jonas> you may still want to link some things statically of course
09:03:33 <shachaf> What if the bug fix requires an API change?
09:04:42 <cpressey> That's quite rare, in my experience
09:06:21 <shachaf> I feel like you're describing an infrequent case with a relatively small benefit, though I don't know.
09:06:28 <int-e> shachaf: You really have to convince people... C/C++ users foremost... that failing more gracefully than producing a segmentation fault is actually desirable and useful. For that, you have to figure out what that more graceful way would be, especially in the context of C (C++ has exceptions, arguably that makes it easier to do something useful.)
09:06:32 <shachaf> This only really matters for security bugs probably.
09:06:55 <shachaf> int-e: Returning an error?
09:07:08 <int-e> returning an error from where?
09:07:18 <shachaf> The call that ran out of stack space.
09:07:25 <int-e> I'm invoking a void foo()
09:07:35 <shachaf> If foo can fail its return type shouldn't be void.
09:07:59 <int-e> I don't think you'll convince anybody that way.
09:08:00 <cpressey> If you "fail gracefully" remember to clean up all the resources you allocated... and hope the cleanup code doesn't also need to "fail gracefully"
09:08:16 <shachaf> This sort of thing seems like a basic requirement for reliable software?
09:08:28 <int-e> Nobody's going to go over millions of lines of code base with void functions and change them.
09:08:36 <shachaf> Oh, I only mean for new programs.
09:08:41 <int-e> (should write billions)
09:08:52 <wob_jonas> no, it really depends on what software it is
09:08:56 <shachaf> Existing software can keep using guard pages if you want, I just want my programs to be reliable.
09:09:12 <wob_jonas> in most cases, in programs I write, most errors don't have to be handled gracefully, I don't have to clean up anything, just print an error message and exit
09:09:30 <wob_jonas> then I find the very few actual errors that will happen often, and handle those
09:09:30 <int-e> Well for your own programming language you can define your own ABI.
09:09:43 <wob_jonas> this is because I mostly write programs for research, not for production
09:09:45 <shachaf> cpressey: I mean, sure, these are problems, but they're problems you already have to solve to write reliable and resilient software.
09:09:55 <shachaf> int-e: I will! I'm just wondering whether this exists anywhere.
09:10:00 <int-e> (Did you know that LLVM knows about GHC's calling convention?)
09:10:00 <wob_jonas> so only I run them, and I will edit and rerun them when they hit a case that I don't handle
09:10:13 <wob_jonas> it would take four times as long if I tried to handle every possible error gracefully
09:10:23 <cpressey> shachaf: Erlang promotes writing reliable and resilient software by *expecting* that processes will sometimes crash
09:10:38 <shachaf> int-e: Here's an example of a bug that came from not having clearly defined stack bounds: https://marcan.st/2017/12/debugging-an-evil-go-runtime-bug/
09:10:47 <int-e> (I didn't up to a short while ago. I have not checked the details.)
09:10:51 <wob_jonas> and most of those errors are either detecting internal logic bugs in my program, or warning me that the input has something that I thought it didn't have and so I don't have code to read it properly
09:10:59 <cpressey> That seems far simpler than trying to guess all the ways they might crash and all the cases you would need to handle to make them "fail gracefully"
09:11:00 <shachaf> cpressey: But it doesn't expect individual function calls to crash.
09:11:20 <shachaf> Certainly the thing you're talking about is important for building a large resilient system.
09:11:34 <shachaf> But it's not going to be a fractal system where you expect every instruction to potentially fail.
09:11:34 <wob_jonas> cpressey: exactly, which is why I don't have to do cleanup for errors in particular, because I expect they can fail in ways that I don't expect, so whatever that causes I have to be able to clean up too
09:11:37 <cpressey> shachaf: Individual function calls can raise an exception, which, if unhandled, causes the process to crash
09:11:52 <shachaf> Exceptions are almost certainly a bad idea for reliable software.
09:12:04 <shachaf> (And probably for other software?)
09:12:16 <wob_jonas> so I store the data on the file system in such a way that I can recover from the state I can get from a crashed process or power failure
09:12:38 <wob_jonas> shachaf: only _handling_ an exception is usually a bad idea
09:12:38 <int-e> shachaf: exceptions solve the API problem presented by functions that cannot return an error condition.
09:12:43 <shachaf> int-e: I did know that they had something though not the exact details. I was under the impression it was some small subset of the GHC calling convention.
09:12:50 <wob_jonas> raising an "exception" that's actually a fatal error that you can't catch is not a bad idea
09:13:18 <shachaf> int-e: They don't, because if you want your software to be reliable you need to be able to reason about all the ways control flow could go.
09:13:37 <shachaf> So "void foo();" only makes the control flow harder to see.
09:13:46 <int-e> shachaf: I really don't know details. I was surprised to find that LLVM knows anything about GHC at all.
09:13:58 <cpressey> shachaf: I can always walk into the server room and turn the computer off. How do you reason about that "flow control"?
09:14:06 <int-e> (It does make sense. But I didn't expect it.)
09:14:15 <cpressey> shachaf: Maybe more to the point, you need a definition of "reliable".
09:14:50 <int-e> shachaf: Arguably the proper way to handle stack overflows is to enlarge the stack.
09:15:41 <int-e> And the proper way to handle OOM is to pause the program until an engineer adds more memory to the computer.
09:15:42 <shachaf> I mean, you could say it's true forever, but now you need to handle out-of-memory errors on every function call.
09:15:55 <shachaf> What if your computer is in space or something?
09:16:51 <cpressey> shachaf: I've heard of aerospace engineering teams being forbidden to use recursive code.
09:16:54 <int-e> I suppose for embedded systems you basically disallow recursion.
09:17:15 <cpressey> And if you can't recurse, you don't even need a stack, really
09:17:42 <int-e> Or, at least, you can statically determine how much of it you'll need.
09:18:13 <int-e> shachaf: how do you fail gracefully in that context?
09:18:26 <int-e> shachaf: spacecraft
09:18:34 <shachaf> Oh, you probably disallow recursion.
09:19:21 <int-e> I guess you have non-mission-critical stuff which you can afford to process on a best effort basis.
09:20:57 <shachaf> It seems to me like it's easy to turn a system that's reliable and predictable into one that isn't (e.g. by adding useful features), but it's impossible to go the other way.
09:21:50 <shachaf> I certainly think it's reasonable for a program to want to allocate all its memory up front so it can avoid malloc errors.
09:22:02 <shachaf> (Not that that's realistic on Linux.)
09:22:54 <cpressey> shachaf: I was serious when I said you should define "reliable".
09:23:17 <cpressey> It's much easier to hit an explicitly identified target.
09:24:13 <shachaf> I don't have a comprehensive definition or anything. I guess there are things like https://en.wikipedia.org/wiki/MISRA_C
09:24:48 <shachaf> It seems easy enough to point out a particular way that software could fail and say that eliminating it would make it more reliable.
09:25:14 <cpressey> shachaf: Are you making up a new language for this, or an existing one, or slightly modifying an existing one?
09:25:59 <shachaf> Let's say making up a new language.
09:26:05 <shachaf> But also wondering about existing systems?
09:32:29 <cpressey> If you're making up a new language then you have a lot more freedom to design it to try to solve some of the problems at the language level.
09:33:40 <shachaf> I'm thinking: End of stack pointer is stored somewhere and available for checking for dynamic cases.
09:33:52 <cpressey> To go back to what you said about bounded recursion, you could make the language only allow bounded recursion, instead of checking if a function exceeds some recursion bound at runtime
09:34:06 <shachaf> Function types are annotated with maximum stack usage, so if the call graph is acyclic everything can be solved statically.
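The static scheme shachaf sketches reduces to longest-path over an acyclic call graph: each function contributes its own frame size plus the worst of its callees. A minimal illustration (the function names and frame sizes are invented):

```python
# Worst-case stack usage over an acyclic call graph: each function's cost is
# its own frame plus the maximum over its callees, memoized so each node is
# solved once.
from functools import lru_cache

frame = {"main": 64, "parse": 256, "emit": 128, "helper": 32}  # bytes
calls = {"main": ["parse", "emit"], "parse": ["helper"],
         "emit": ["helper"], "helper": []}

@lru_cache(maxsize=None)
def max_stack(fn):
    return frame[fn] + max((max_stack(c) for c in calls[fn]), default=0)

print(max_stack("main"))  # 64 + (256 + 32) = 352
```

With recursion or indirect calls the graph is no longer acyclic, which is exactly where the dynamic checks discussed below the annotations come in.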
09:34:35 <shachaf> Only allowing bounded recursion seems like a pretty strong constraint.
09:35:12 <shachaf> I mean, especially if it has to be statically bounded (rather than e.g. bounded by an argument).
09:35:22 <cpressey> So you want to allow unbounded recursion but you also want it to be reliable
09:35:53 <int-e> He wants a Turing Machine... well okay... a RAM machine.
09:36:43 <HackEso> Queen Shachaf of the Dawn sprø som selleri and cosplays Nepeta Leijon on weekends. He hates bell peppers with a passion. He doesn't know when to stop asking questions. We don't like this.
09:36:55 <int-e> Queen/He. Okay, mystery solved.
09:37:33 <shachaf> I want to allow programmers to do what they want, and make it easy to do the reasonable thing.
09:38:27 <shachaf> I think dynamic checks are pretty reasonable. Say you have a recursive function to search a tree or something, and you run out of stack space. You return an error. That seems fine to me?
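The dynamic check shachaf describes can be sketched with an explicit budget in place of the real stack-limit comparison (the tree shape, budget, and sentinel are all illustrative):

```python
# A recursive tree search that carries an explicit depth budget and returns an
# error value instead of overflowing the stack. In a real implementation the
# budget check would compare the stack pointer against a known stack bound.
OUT_OF_STACK = object()

def find(node, target, budget=1000):
    if budget == 0:
        return OUT_OF_STACK          # fail gracefully, not with SIGSEGV
    value, children = node
    if value == target:
        return node
    for child in children:
        r = find(child, target, budget - 1)
        if r is not None:
            return r                 # also propagates OUT_OF_STACK upward
    return None

tree = (1, [(2, []), (3, [(4, [])])])
print(find(tree, 4, budget=10))      # found
print(find(tree, 4, budget=1) is OUT_OF_STACK)  # budget exhausted
```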
09:38:55 <HackEso> 11837:2019-06-17 <b_jonäs> `` perl -pi -e\'s/Bond\\K/, Queen Elizabeth the first/\' wisdom/p*aneb* \ 9872:2016-12-05 <boil̈y> slwd shachaf//s/^/Queen / \ 7390:2016-04-14 <hppavilion[1̈]> le/rn victoria/Queen Victoria is the most victorious queen the world has ever known, even having won at the not dying contest.
09:40:34 <shachaf> All the things I'm saying are kind of exploratory, but this is my general attitude. I don't want to make it impossible to avoid these things, I just want the mechanism to be available.
09:41:11 <shachaf> perl -pi -e presumably makes perl generate a position-independent executable
09:41:51 <wob_jonas> shachaf: presumably, since this is x86_64 so executables are position-independent by default
09:44:07 <shachaf> But perl doesn't normally generate an executable in the first place.
09:45:07 <shachaf> I don't know why I'm a queen but I guess I'll take it.
09:46:14 <HackEso> wisdom/people who taneb is not
09:46:28 <int-e> shachaf: seems you have fungot to thank for that one
09:46:29 <shachaf> `? people who taneb is not
09:46:30 <HackEso> elliott, a rabbi, Mark Zuckerberg, James Bond, Queen Elizabeth the first. Pending approval: Shigeru Miyamoto.
09:46:54 <int-e> shachaf: https://esolangs.org/logs/2016-12-05.html#lRl
09:46:56 <shachaf> Taneb: Which rabbi are you not?
09:47:35 <Taneb> shachaf: the one from Fiddler on the Roof
09:47:50 <Taneb> Shouldn't you be in bed right now
09:48:21 <shachaf> But a few days ago I sang some of the songs from it, in Bodega Bay.
09:48:22 <Taneb> Neither have I, but I was once in a production of it
09:48:27 <HackEso> 992) <Taneb> I've also pretended to be Queen Elizabeth the first, but that was a desperate plea for attention
09:48:30 <Taneb> I played the rabbi
09:48:57 <Taneb> wob_jonas: I can picture exactly where I was when I pretended to be Queen Elizabeth the first
09:49:02 <HackEso> 1333) <shachaf> `unidecode ⧸🙼 <HackEso> [U+29F8 BIG SOLIDUS] [U+1F67C VERY HEAVY SOLIDUS] <shachaf> it is with a very heavy solidus that i write to inform you that unicode has too many code points
09:49:38 <Taneb> Does `quote special case numbers? or does it just find all the quotes that mention the number
09:49:49 <HackEso> 1) <Slereah> EgoBot just opened a chat session with me to say "bork bork bork"
09:50:19 <HackEso> /srv/hackeso-code/multibot_cmds/lib/limits: line 5: exec: `which: not found
09:50:26 <HackEso> #!/bin/sh \ allquotes | if [ "$1" ]; then \ if expr "$1" + 0 >/dev/null 2>&1; then \ sed "$1q;d" \ else \ grep -P -i -- "$1" \ fi \ else shuf --random-source=/dev/urandom -n 1; fi
09:51:10 <shachaf> What's shuf's normal random-source?
09:51:12 <Taneb> Yeah, that looks like it's special casing numbers
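The shell script's dispatch logic is easy to restate; here is a rough Python equivalent (a simplification, not the real HackEso code: the script's `expr "$1" + 0` test also accepts signed numbers, whereas `isdigit` does not):

```python
# Mimic HackEso's `quote`: a numeric argument selects that quote by number,
# any other argument is a case-insensitive regex filter, and no argument
# picks a random quote.
import random
import re

def quote(lines, arg=None):
    if arg is None:
        return random.choice(lines)
    if arg.isdigit():                 # the special case for numbers
        return lines[int(arg) - 1]    # quotes are numbered from 1
    return "\n".join(l for l in lines if re.search(arg, l, re.I))
```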
09:51:39 <shachaf> `` allquotes | strace -fo tmp/OUT shuf -n 1
09:51:39 <HackEso> 326) <Sgeo> Hmm. I guess it's nearby GRBs that would be problematic? <Phantom_Hoover> Sgeo, if by 'problematic' you mean 'what's that in the AAAAAAAAARRRRRGGGGHHHH'.
09:51:42 <HackEso> https://hack.esolangs.org/tmp/OUT
09:52:18 <HackEso> 1/1:9771:2016-11-24 <oerjän> sled bin/quote//s,shuf,shuf --random-source=/dev/urandom, \ 978:2012-12-09 <Gregör> revert \ 977:2012-12-09 <shachäf> cp bin/quote bin/realquote; echo -n $'#!/bin/sh\nsleep 1\nrealquote "$@"\n' > bin/quote \ 0:2012-02-16 Initïal import.
09:52:45 <int-e> hah. perhaps there was an older version of `shuf` that used /dev/random instead?
09:53:12 <shachaf> Taneb: I heard there are questions that cross your eyes when posed. Is that true?
09:54:44 <shachaf> Looking at the logs from that time, oerjan was spreading false rumors about /dev/urandom.
09:54:52 <Taneb> shachaf: I believe so. "Can you cross your eyes?" might make me cross my eyes when posed
09:55:08 <HackEso> 124) <alise> I love logic, especially the part where it makes no sense.
09:55:19 <HackEso> 64) Note that quote number 124 is not actually true.
09:56:09 <HackEso> 124) <alise> I love logic, especially the part where it makes no sense.
09:56:12 <HackEso> 124) <alise> I love logic, especially the part where it makes no sense.
09:56:14 <HackEso> 141) <oklopol> comex: what? <oklopol> *vorpal <oklopol> comex: hi, tab-complete completed c to comex instead of Vorpal, dunno why \ 237) <oklopol> okay see in my head it went, you send from your other number smth like "i'd certainly like to see you in those pink panties again" and she's like "WHAT?!? Sgeo took a pic?!?!?! that FUCKING PIG"
09:56:34 <HackEso> 124) <alise> I love logic, especially the part where it makes no sense.
09:57:45 -!- atslash has joined.
09:58:41 <shachaf> I guess dynamic libraries calling other dynamic libraries is actually reasonably common.
09:59:15 <shachaf> If the whole system was built with this thing in mind, you could maybe do something fancy during relocation.
09:59:20 <shachaf> But that's almost certainly a bad idea.
09:59:35 <shachaf> Instead you should just ban dynamic libraries.
10:00:34 <shachaf> `fetch http://slbkbs.org/tmp/out.a tmp/out.a
10:00:39 <HackEso> http:/slbkbs.org/tmp/out.a: No such file or directory
10:00:45 <shachaf> `fetch tmp/out.a http://slbkbs.org/tmp/out.a
10:00:47 <HackEso> 2019-08-08 10:00:46 URL:http://slbkbs.org/tmp/out.a [923/923] -> "tmp/out.a" [1]
10:00:55 <HackEso> /srv/hackeso-code/multibot_cmds/lib/limits: line 5: /hackenv/tmp/out.a: Permission denied \ /srv/hackeso-code/multibot_cmds/lib/limits: line 5: exec: /hackenv/tmp/out.a: cannot execute: Permission denied
10:01:33 <HackEso> tmp/out.a: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, not stripped
10:02:27 <wob_jonas> shachaf: the system does do a lot of fancy things during relocation already. resolves strong symbols overriding weak symbols, has indirect function symbols that are resolved everywhere the first time they're called, etc
10:03:07 <shachaf> I want to generate a program that uses dynamic libraries, because that's the only way you can make software nowadays apparently.
10:03:17 <shachaf> So I just want it to do the minimum possible.
10:03:34 <wob_jonas> shachaf: you know you can mix and match stuff, link some libraries statically and some dynamically
10:03:48 <shachaf> Yes. But some things you have to link dynamically.
10:04:36 <shachaf> On Linux I think it's pretty much only OpenGL and Xlib, and their dependencies such as libc.
10:05:20 <shachaf> On Windows system calls are only available via dynamic linking.
10:14:52 <shachaf> By the way, one reason I care about this stack usage thing is that it's relevant for implementing efficient coroutines.
10:18:41 <wob_jonas> shachaf: you probably already know this, but ais523 is the person to ask about this
10:19:27 <shachaf> Except for the part where ais523 isn't here right now.
10:24:36 <cpressey> In a desktop context, if a program starts using excessive amounts of stack space, what I'd like to see is the OS staying responsive, so that I can find the process and kill it.
10:25:13 <shachaf> Yes, that's a good OS feature.
10:25:22 <shachaf> It's pretty ridiculous how broken things are.
10:25:23 <cpressey> The program shouldn't be responsible for deciding what "excessive amounts" are, so it's an OS-domain thing.
10:27:27 <wob_jonas> cpressey: you can set a soft ulimit, in which case the program gets a signal
10:27:49 <shachaf> That seems like a point about memory consumption in general (of which "the stack" should only be a tiny fraction).
10:28:05 <shachaf> Of course stacks are just regular memory.
10:28:14 <wob_jonas> shachaf: there's a specific setrlimit for stack space
10:29:17 <shachaf> Sure, but you can put your stack pointer wherever you want.
10:30:07 <shachaf> The limit only applies to the "process stack" and not thread stacks anyway, I think.
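For reference, the limit under discussion is easy to inspect from Python on Linux via the standard `resource` module (this only reads the process-wide RLIMIT_STACK; as shachaf notes, thread stacks are sized separately, e.g. by pthread attributes):

```python
# Read the soft and hard RLIMIT_STACK values for the current process.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_STACK)
# resource.RLIM_INFINITY (-1) means "unlimited".
print(soft, hard)
```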
10:38:34 -!- Phantom__Hoover has quit (Ping timeout: 272 seconds).
10:40:05 -!- john_metcalf has quit (Quit: http://corewar.co.uk).
10:42:27 <int-e> salpynx: https://en.wikipedia.org/w/index.php?title=Iota_and_Jot&diff=909854762&oldid=909568636 -- I didn't know that "comprised of" was so difficult :-/
10:46:40 <shachaf> I don't know what it means to compose sequences, so that seems less clear to me.
10:49:23 <int-e> https://en.wikipedia.org/wiki/User:Giraffedata/comprised_of <-- that user is on a vendetta against this usage.
10:50:06 <cpressey> Using that word correctly always sounds wrong to me, so I just avoid it entirely
10:50:28 <int-e> But the new phrasing isn't accurate either.
10:51:12 <shachaf> It's true that "comprise" is more often used in the reverse sense.
10:52:37 * int-e rephrases to "consisting of"
10:52:45 <wob_jonas> shachaf: frankly I don't know how it works, but my guess is that it applies to the total of all stacks
10:53:01 <wob_jonas> the kernel knows which mappings are stacks because they're set to auto-grow downwards
10:53:07 <shachaf> That would certainly be unexpected to me.
10:53:21 <shachaf> What does it mean for stacks to auto-grow downward?
10:53:35 <shachaf> Compared to memory which is mapped and gets faulted in on demand.
10:53:41 <shachaf> Do you mean stack memory gets mapped on demand?
10:53:54 <wob_jonas> that if you write in a part near the bottom of the stack, its size is extended
10:53:58 <wob_jonas> but I might be completely wrong here
10:54:15 <wob_jonas> maybe that applies only to x86_32, where the address space is sparse
10:54:48 <shachaf> If it's tight that might mean people map memory near the stack, in which case you're saying the auto-growing stops?
10:55:00 <shachaf> My impression was that a fixed amount like 8MB was mapped at startup and that's that.
10:55:51 <wob_jonas> shachaf: yes, but the mappings are placed in the address space in a semi-smart way so that won't happen often
10:57:10 <wob_jonas> shachaf: there's a MAP_GROWSDOWN flag of mmap for autoextending, but it's possible that it's not actually used for stacks
10:57:19 <wob_jonas> http://man7.org/linux/man-pages/man2/mmap.2.html
10:57:51 <wob_jonas> "Touching an address in the "guard" page below the mapping will cause the mapping to grow by a page. This growth can be repeated until the mapping grows to within a page of the high end of the next lower mapping, at which point touching the "guard" page will result in a SIGSEGV signal."
10:58:08 <cpressey> wob_jonas: It's hard to imagine what that would be used for *besides* stacks
10:58:22 <wob_jonas> cpressey: the manpage explicitly says that it's for stacks
10:58:36 <wob_jonas> but it's possible that it's not used at all these days,
10:58:45 <wob_jonas> or only on certain old architectures
10:58:52 <wob_jonas> or old types of executables or something
10:59:24 <HackEso> 7fbfb9a000-7fbfbbb000 rw-p 00000000 00:00 0 [stack]
10:59:36 <wob_jonas> the flags field is 0, so it doesn't actually grow down
11:00:13 <wob_jonas> do we have a typical libc-based x86_32 executable somewhere on hackeso so we can test how that behaves?
11:03:26 <cpressey> "After some tests on an x86_64 Debian machine, I've found that the stack grows without any system call (according to strace)."
11:03:38 <cpressey> https://unix.stackexchange.com/questions/145557/how-does-stack-allocation-work-in-linux
11:04:30 <wob_jonas> sure, but does it grow the mapping, or does it just fault in MAP_NORESERVE pages?
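Whether the `[stack]` mapping itself grows can be checked from inside the process by parsing /proc/self/maps, the same file HackEso is reading in this conversation. A Linux-only sketch:

```python
def stack_mapping_size():
    """Size in bytes of the main thread's [stack] VMA (Linux only)."""
    with open("/proc/self/maps") as f:
        for line in f:
            if line.rstrip().endswith("[stack]"):
                lo, hi = (int(x, 16) for x in line.split()[0].split("-"))
                return hi - lo

print(stack_mapping_size())
```

Calling this before and after deep recursion shows whether the VMA was extended; on typical kernels the reported size starts at a few hundred KiB and grows on fault up to the RLIMIT_STACK soft limit.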
11:05:09 <wob_jonas> ``` perl -e print(0x7fbfb9a000-0x7fbfbbb000)
11:05:09 <HackEso> bash: -c: line 0: syntax error near unexpected token `(' \ bash: -c: line 0: `perl -e print(0x7fbfb9a000-0x7fbfbbb000)'
11:05:16 <wob_jonas> ``` perl -e 'print(0x7fbfb9a000-0x7fbfbbb000)'
11:07:24 <cpressey> wob_jonas: Thinking about it, it's hard to imagine the kernel implementing MAP_GROWSDOWN in a way that doesn't involve a fault. Well, maybe on some hardware, but...?
11:07:52 <cpressey> I guess it's there to give the kernel the freedom to implement it one way or another, depending on hardware?
11:08:31 <shachaf> It seems like another bizarre way to make things unpredictable.
11:08:33 <cpressey> And, the flag should still be shown in /proc/'s view of it?
11:08:41 <shachaf> What if someone accidentally maps pages near the current end of the stack?
11:08:51 <wob_jonas> cpressey: of course it involves a page fault. everything involves a page fault, including allocating mapped pages that weren't used before, or bringing pages in from swap, or just the kernel doing whatever at whim. but the page fault is handled in the kernel, it never raises a signal visible to the process.
11:09:21 <wob_jonas> the process can technically tell whether a page is mapped, but it should not do that except for performance optimizations or debugging
11:09:32 <wob_jonas> faulting pages in is mostly invisible in small amounts
11:09:59 <wob_jonas> obviously it's visible when firefox fills up all my ram and my hard disk starts whirring and the whole system freezes to a halt
11:10:04 <wob_jonas> but extending the swap a little won't involve that
11:10:09 <int-e> hmm that is fairly round, 128k plus 4k
11:11:11 <wob_jonas> shachaf => they won't, because the kernel and libc have heuristics and knobs in sysctl for what address range to map things at, so a single-threaded stack is mapped in places where you can't *accidentally* map something below (you can map something deliberately, but that's your problem),
11:11:41 <cpressey> wob_jonas: Well, trying to answer shachaf's question, "What does it mean for stacks to auto-grow downward? Compared to memory which is mapped and gets faulted in on demand." -- it does get faulted in on demand, by the kernel, transparent to the userland process
11:11:46 <shachaf> wob_jonas: OK, I tested it and it does indeed grow.
11:11:47 <wob_jonas> for multi-threaded it's the problem of the thread library and you may have to specify a hint for the stack space you need if you are starting LOTS of threads, but then you shouldn't start lots of threads
11:12:31 <wob_jonas> anyway, on x86_32 this made sense because the address space was small, and some processes used lots of stack while others used lots of heap
11:12:43 <cpressey> I could imagine a userspace implementation of growing the stack, that the compiler handles via signals or something, but... why?
11:13:16 <wob_jonas> on x86_64 right now it's less important, because we have significantly less RAM than address space, but this may change in our lifetime
11:13:46 <wob_jonas> cpressey: yes, you can do such a stupid thing. there is an interface for handling SIGSEGV gracefully, but it's hard to get right and good only for silly tricks
11:14:33 <wob_jonas> the kernel gives the process all the info about the segfault that it knows in the sa_siginfo or whatever that's called, so the process can know where the fault is
11:14:58 <wob_jonas> you can implement user-space swapping that way, or unusual garbage collectors
11:15:25 <cpressey> Tbh, I hate memory mapping and signals. As abstractions. They're ugly. They're performant, so I see why they're used, but that doesn't mean I have to think they're pleasant.
11:15:43 <wob_jonas> but it's an order of magnitude more dangerous than just the usual cases when you try to do something nontrivial in a signal handler
11:16:06 <wob_jonas> cpressey: yes, which is why we usually don't do esoteric tricks like this unless they're really needed
11:16:58 <salpynx> int-e: whoa, I thought that edit was a minor phrasing maybe-improvement, but it comes with an essay and its own project? I'm going to have to read the essay and see if I agree with their point, I have no strong opinion. Correct logic relating to the subject is more important.
11:17:04 <shachaf> `` echo $'#include <stdio.h>\n#include <string.h>\n#include <alloca.h>\nchar buf[1024]; void print_stack() { FILE *f = fopen("/proc/self/maps", "r"); while (fgets(buf, sizeof buf, f) != 0) { if (strstr(buf, "[stack]") != 0) { printf("%s", buf); break; } } fclose(f); } int main(int argc, char **argv) { while (1) { print_stack(); alloca(1024); } return 0; }' >/tmp/f.c;gcc -o /tmp/f /tmp/f.c;/tmp/f>tmp/OUT
11:17:05 <wob_jonas> we just let the kernel guys handle swapping, whether it's swapping to disk, to compressed RAM, or (sigh) to network
11:17:07 <HackEso> /hackenv/bin/`: line 5: 63 Segmentation fault /tmp/f > tmp/OUT
11:17:11 <HackEso> https://hack.esolangs.org/tmp/OUT
11:17:38 <shachaf> ``` perl -e 'print(0x7fbf0d2000-0x7fbf8cf000)'
11:17:58 <shachaf> 8675309 bytes is the maximum. Figures.
11:18:07 <cpressey> I'll stick to writing interpreters in Haskell where I can pretend everything is just math. Beautiful, beautiful math. Which I am bad at.
11:18:15 <wob_jonas> shachaf: I think that may depend on sysctl stuff and perhaps setrlimit
11:18:43 <shachaf> Nope, it's always that number.
11:18:59 <wob_jonas> cpressey: yeah, that aligns with the general good advice to not optimize prematurely
11:19:22 <shachaf> I mean, depends on what you're doing.
11:19:26 <wob_jonas> cpressey: but this is #esoteric so we sometimes talk about silly optimizations
11:20:29 <wob_jonas> shachaf: dude, just last evening I made a program to search solutions to int-e's problem in python, even though that means that the integers take like 128 bytes of memory rather than just the 24 or 32 bytes that I would need in C++
11:20:54 <wob_jonas> but I did at least get preliminary results, and know what my inner loop is and what I would have to rewrite if I wanted to do it better
11:21:10 <Taneb> wob_jonas: I presume that int-e's problem is something other than "someone edited my Wikipedia edit to not use 'comprised of'"
11:21:31 <wob_jonas> Taneb: http://esolangs.org/logs/2019-08-02.html#lY
11:22:04 <shachaf> wob_jonas: The arguments you made about 32-bit stack usage don't work together.
11:22:44 <shachaf> If it's designed the way it is to save on address space, you might accidentally map something into that area (by using the rest of your address space).
11:23:05 <shachaf> So it could certainly be an accident, in exactly the cases that it's supposed to be helping.
11:23:21 <shachaf> If accidental mappings into that area can't happen, why not just pre-map the whole region?
11:23:28 <wob_jonas> shachaf: as a simple model, assume that the stack is mapped near the top of the user address space, and everything else is growing from the bottom of it
11:23:58 <wob_jonas> shachaf: we don't premap because we don't know if you'll have 256 M of stack and 256 M of heap or 2M of stack and 500 M of heap in a 1G address space
11:24:16 <Taneb> wob_jonas: does it have to have all those numbers as prime factors, or just some subset of them?
11:24:40 <shachaf> Just a subset according to https://en.wikipedia.org/wiki/Smooth_number
11:24:53 <wob_jonas> shachaf: but it's possible that it wasn't because of address space, but because when this was invented, they didn't have MAP_NORESERVE yet
11:25:09 <wob_jonas> Taneb: just a subset, at least the way I interpreted it
11:25:25 <wob_jonas> if it was all those numbers, you'd just have to divide the problem by one of each first and get a smaller problem
11:26:26 <wob_jonas> Taneb: https://esolangs.org/logs/2019-08-08.html#lL is the best I got so far. I'll do a better search, but I'm busy with other things
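For reference, the problem from the linked log is roughly: find a product of small primes (a "smooth number") as close as possible to (10**50-1)/9. A minimal, unoptimized sketch of such a search — not wob_jonas's actual program — that finds the largest smooth number not exceeding a bound:

```python
def largest_smooth(limit, primes):
    """Largest product of the given primes (repetition allowed) that is <= limit."""
    best = 1

    def dfs(value, start):
        nonlocal best
        best = max(best, value)
        # only multiply by primes[start:] so each product is enumerated once
        for i in range(start, len(primes)):
            if value * primes[i] <= limit:
                dfs(value * primes[i], i)

    dfs(1, 0)
    return best

print(largest_smooth(1000, [2, 3, 5, 7]))  # 1000 = 2^3 * 5^3 is itself 7-smooth
```

For the real 50-digit target this exhaustive walk is hopeless; the programs discussed here have to prune the search or, as int-e describes later, meet in the middle.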
11:26:47 <shachaf> Taneb: I should indeed be in bed right now.
11:28:20 <wob_jonas> or maybe they didn't have a way to not allocate the supporting structures that take up like 1/1000 or 1/500 of the memory mapped, which could be a lot on old systems if you have 100 processes with 8M stack mapping each
11:30:30 <wob_jonas> or maybe it's for some other historical reason that isn't worth changing now
11:34:51 <HackEso> o is a popular comedy fantasy webcomic. It's about a group called the Order of the Stick, as they go about their adventures with minimal competence, and eventually stumble into a plan by an undead sorcerer to conquer the world, and they're out to stop him and conquer their personal problems at the same time. Hopefully not in that order.
11:43:35 <cpressey> tfw an argument breaks out about whether the filename-selecting regex in the config file should be written as /^.*\.(ts|tsx)$/ or as /^.*\.tsx?$/
11:47:29 <wob_jonas> `perl -e print(rand(2)<1 ? "it should definitely be written as /^.*\.(ts|tsx)$/" : "it should definitely be written as /^.*\.tsx?$/"
11:47:30 <HackEso> syntax error at -e line 1, at EOF \ Execution of -e aborted due to compilation errors.
11:47:33 <wob_jonas> `perl -e print(rand(2)<1 ? "it should definitely be written as /^.*\.(ts|tsx)$/" : "it should definitely be written as /^.*\.tsx?$/")
11:47:33 <HackEso> it should definitely be written as /^.*.tsx?
11:47:48 <wob_jonas> `perl -e print(rand(2)<1 ? "it should definitely be written as /^.*\.(ts|tsx)\$/" : "it should definitely be written as /^.*\.tsx?\$/")
11:47:48 <HackEso> it should definitely be written as /^.*.(ts|tsx)$/
11:48:04 <wob_jonas> the other way is unclear and hard to read and should never be committed to production code
11:50:08 -!- Melvar has quit (Quit: rebooting).
11:59:52 -!- Melvar has joined.
12:10:08 <wob_jonas> [ (q: , 0j_2": (9<.@%~_1+10x^50)&-) 11111111111269581656547160489766631945078430800000x
12:10:09 <j-bot> wob_jonas: |domain error
12:10:09 <j-bot> wob_jonas: | (q:,0j_2":(9<.@%~_1+10^50)&-)11111111111269581656547160489766631945078430800000
12:10:19 <wob_jonas> [ (0j_2": (9<.@%~_1+10x^50)&-) 11111111111269581656547160489766631945078430800000x
12:10:19 <j-bot> wob_jonas: _1.58e38
12:10:30 <wob_jonas> [ (q:) 11111111111269581656547160489766631945078430800000x
12:10:30 <j-bot> wob_jonas: 2 2 2 2 2 2 2 3 3 3 3 3 5 5 5 5 5 7 7 7 7 7 7 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 19 19 19 19 19 19 19 29
12:15:42 <wob_jonas> ^ int-e: that's the best you can get with prime factors up to 29 by the way
12:16:09 <salpynx> int-e: The "regular language Jot" is a subset of "all sequences of 0 and 1" according to the definition of a formal language, if we understand 'all sequences' to also include the empty string. In this case 'all sequences of 0 and 1' is the whole, and contains Jot (a subset). So 'all.seq.0.1 contains Jot' => 'all.seq.0.1 comprises Jot'. Flip it to t
12:16:10 <salpynx> he passive => "Jot is comprised of all all.seq.0.1"
12:16:25 <salpynx> which is what you wrote originally with "Jot is the regular language comprised of all sequences of 0 and 1"
12:16:41 <Taneb> salpynx: I think it's exactly the set of all sequences of 0 and 1
12:17:23 <cpressey> Jot is a programming language, not a formal language
12:17:53 <salpynx> taneb: yes, that's the technicality that makes both versions equally awkward, but allows my somewhat contrived and cheeky justification to hold ;)
12:18:04 <wob_jonas> [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111194529560647830327114462838933000000000x
12:18:05 <j-bot> wob_jonas: 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 7 7 7 7 7 7 11 17 17 19 19 19 19 19 19 19 19 19 23 23 23 23 23 29 71 | 8.34e37
12:18:06 <cpressey> Unless all Jot does is accept or reject a program. In which case it's a set of strings, i.e. a formal language
12:18:23 <Taneb> cpressey: yes, it's a programming language
12:18:26 <cpressey> and in that case it's almost certainly not "any sequence of 0 or 1s" because that's not a very interesting language
12:19:02 <wob_jonas> [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111185248065004566815208736562760436940800x NB. so far this is worse than the lucky one that I got yesterday
12:19:03 <j-bot> wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 11 11 11 11 17 17 17 17 17 17 17 17 17 17 19 19 19 19 23 29 29 29 29 31 47 | 7.41e37
12:19:23 <Taneb> I believe the set of syntactically valid jot programs is equal to the set of all sequences of 0 and 1
12:19:24 <wob_jonas> [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111161923559652900718659162521362304687500x
12:19:25 <j-bot> wob_jonas: 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 7 7 7 7 7 7 11 13 13 13 17 17 17 17 17 17 23 31 59 | 5.08e37
12:19:30 <salpynx> int-e originally wrote "Jot is the regular language", so I was using that as a starting point
12:19:52 <Taneb> https://esolangs.org/wiki/Jot "Every combination of 0's and 1's is a syntactically valid Jot program, including the null program."
12:20:00 <salpynx> including the empty string
12:23:00 <salpynx> taneb: your point re. them being equal is key, I was using 'subset' to deliberately (and misleadingly) imply 'proper subset', but rely on the technical meaning of S ⊆ S
12:23:47 <wob_jonas> I should make this program output the factoring so that I don't have to enter this command here
12:23:56 <wob_jonas> but then, this command "proves" that I'm not cheating
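The j-bot round-trips could indeed be replaced by a few lines in the search program itself; a throwaway trial-division factorizer mirroring J's `q:` (a sketch — fast here only because every candidate is smooth by construction):

```python
def factor(n):
    """Prime factorization by trial division, in non-decreasing order."""
    out = []
    p = 2
    while p * p <= n:
        while n % p == 0:
            out.append(p)
            n //= p
        p += 1
    if n > 1:
        out.append(n)  # leftover prime cofactor
    return out

print(factor(360))  # [2, 2, 2, 3, 3, 5]
```

On a non-smooth 50-digit input the `p * p <= n` loop would run essentially forever, so this only works as a self-check on numbers the search itself built from small primes.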
12:23:59 <salpynx> The tone of the anti-"comprised of" justification
12:24:24 <salpynx> made me want to construct some kind of argument to justify it on a technicality
12:25:08 <cpressey> I think you can extend any programming language to have a syntax where any string of symbols is syntactically correct but results in some uninteresting semantics such as terminating immediately and producing no output
12:25:21 <Taneb> cpressey: in Jot it's actually interesting
12:25:58 <salpynx> there's an xkcd for this: "Effect an Effect", #326
12:27:15 <wob_jonas> [ (":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)) 11111111111140890057058176051913882460557854562500x
12:27:16 <j-bot> wob_jonas: 2 2 3 3 5 5 5 5 5 5 7 7 11 11 11 11 11 11 11 11 13 13 17 19 19 19 19 19 19 19 19 19 19 19 19 19 19 23 23 23 23 29 29 59 59 | 2.98e37
12:28:37 <cpressey> Saying "There's an xkcd for this" is the modern version of quoting Bible verses.
12:31:45 <cpressey> Taneb: my point was trying to be something about how some languages have this thing called "syntax" and others don't and you can always take "syntax" away if you like.
12:32:39 <cpressey> It's like a front-line triage to eliminate a class of programs we think you won't be interested in running because we haven't defined any particularly interesting meanings for them
12:34:37 <salpynx> cpressey: I think you're right, Jot isn't a formal or regular language, so that's a problem with the sentence. How would you phrase it to indicate that the regular language all.seq.0.1 describes the syntax of Jot, which I think is the intended meaning?
12:36:47 <salpynx> "The syntax of Jot is comprised of the regular language comprised of..."?
12:38:15 <cpressey> "Any sequence of 0's and 1's is a syntactically valid Jot program" ?
12:38:51 <salpynx> there's a news article about this wiki editor: https://medium.com/backchannel/meet-the-ultimate-wikignome-10508842caad I think I shouldn't be trying so hard to counter their pet peeve
12:39:16 <cpressey> I'm deliberately trying to phrase it in a conversational, almost high-school level way
12:40:02 <cpressey> But the smallest change to the page as I see it now might just be "The syntax of Jot is the regular language..."
12:40:05 <salpynx> cpressey: that's constructive. I should go with that :)
12:41:25 <salpynx> is a syntax 'equal to' a formal language?
12:42:12 <cpressey> salpynx: Yes, I think that's fair to say.
12:43:33 <salpynx> ok, I wasn't sure. We are in pedantic territory here, I wanted to be sure.
12:50:03 <wob_jonas> [ try =: ":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)
12:50:08 <wob_jonas> [ try 11111111111122460609418029716397205124244969250000x
12:50:09 <j-bot> wob_jonas: 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 11 11 11 13 13 13 17 17 17 17 17 17 17 19 19 19 29 29 29 29 29 29 29 29 29 29 83 97 | 1.13e37
12:55:28 <salpynx> "All sequences of 0 and 1 comprise the regular language that is the syntax of Jot." {0,1}* ⊇ Jot syntax "The syntax of Jot is the regular language comprised of all sequences of 0 and 1" Jot syntax ⊆ {0,1}*
13:01:52 <salpynx> I'll stop now, and sleep on it before making any wiki edits. I was enjoying the counter-pedantry, not sure these really are good edits. Making the first edit and prompting the wiki user to convert it to the passive 'comprised of' version since Jot is the focus of the article would be funny.
13:02:12 <salpynx> All languages are a bit silly.
13:03:40 <salpynx> Strict natural language grammar only seems valid when the last natural speaker is dead.
13:13:27 <int-e> > let x = 11111111111111111215673466122483207680856045806875; y = 10^50 `div` 9 in (fromIntegral (x - y) :: Double, fromIntegral x - fromIntegral y :: Double)
13:14:57 -!- howlands has joined.
13:23:35 <wob_jonas> [ try 11111111111269581656547160489766631945078430800000x
13:23:35 <j-bot> wob_jonas: 2 2 2 2 2 2 2 3 3 3 3 3 5 5 5 5 5 7 7 7 7 7 7 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 19 19 19 19 19 19 19 29 | 1.58e38
13:25:09 <int-e> wob_jonas: this wasn't a new result; it was just an experiment how bad the cancellation with double precision would be :)
13:26:21 <int-e> 11111111111269581656547160489766631945078430800000 is the optimum for the first 11 and 12 primes.
13:26:39 <wob_jonas> int-e: sure, the one I entered isn't really new either, I just started a search with different parameters while the first one is still running (for an expected three more hours), and it found this while it's still searching the part of the search space that I've already searched fully
13:26:59 <wob_jonas> it will get into a disjoint part of the search space eventually though
13:27:54 <int-e> But I couldn't make my approach for finding the optimum work beyond 15 primes. (I actually ran out of memory (32GB here).)
13:28:13 <wob_jonas> sure, I'm not looking for the optimum with a given number of primes now
13:28:45 <wob_jonas> the three hour long search that I'm running could find solutions with 103 in them in theory
13:28:48 <int-e> (restricting to the first n+m primes was the "first approach" in my paste)
13:32:43 <wob_jonas> the first search should eventually reproduce the best solution that I found during the night
13:37:09 <wob_jonas> [ try 11111111111112819215968651733403643249992663040000x
13:37:10 <j-bot> wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 5 5 5 5 7 7 7 7 7 11 11 11 11 11 11 13 13 13 13 17 17 19 19 23 23 23 23 23 23 29 29 29 47 53 83 | 1.71e36
13:37:20 <wob_jonas> I'm still afraid that that one was a lucky fluke by the way
13:38:02 <int-e> feel free to suggest a different target to check that theory :)
13:38:27 <wob_jonas> no need, I can just modify my search if I want
13:38:47 <wob_jonas> and my search is still an inefficient single-threaded python3 program
13:39:34 <wob_jonas> it would probably become much faster if I rewrote it to an efficient C++ program
13:39:48 <int-e> what are you using for arithmetic?
13:40:23 <int-e> Pretty sure that's what they use by default.
13:40:24 <wob_jonas> but note that most of the numbers are 1 or 2 or 3 words long, none longer than 3 words
13:40:40 <wob_jonas> and I'm multiplying numbers so that the product doesn't exceed 3 words
13:41:31 <wob_jonas> I think the arithmetic isn't the slow part
13:41:38 <int-e> But you're right; this means that gmp is probably not the bottleneck here; the interpreter overhead should be significant.
13:42:02 <wob_jonas> but the numbers are too large to fit in a small int, so they're allocated spread randomly around the python heap
13:42:29 <wob_jonas> so I have a lot of overhead on memory throughput
13:43:16 <wob_jonas> a proper program would allocate these in a dense array, 32 or 24 or 16 bytes per number
13:43:52 <wob_jonas> (16 bytes means that I use approximate numbers, which means I need extra code to track the exact values for when I find a hit, but it'd be the most efficient)
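The approximate-16-bytes idea can be sketched by running the whole search in log space: the hot path touches only floats and a small exponent array, and the exact big integer is reconstructed just for the reported hit. Purely illustrative (tiny primes, tiny target), not the actual search program:

```python
import math

PRIMES = [2, 3, 5, 7, 11, 13]
LOGS = [math.log(p) for p in PRIMES]
TARGET = 10**6
LOG_TARGET = math.log(TARGET)

best_log = -1.0
best_exps = None

def dfs(log_value, exps, start):
    """Maximize the sum of logs subject to <= log(target); floats only."""
    global best_log, best_exps
    if log_value > best_log:
        best_log, best_exps = log_value, exps[:]
    for i in range(start, len(PRIMES)):
        if log_value + LOGS[i] <= LOG_TARGET + 1e-12:  # epsilon for float error
            exps[i] += 1
            dfs(log_value + LOGS[i], exps, i)
            exps[i] -= 1

dfs(0.0, [0] * len(PRIMES), 0)

# Exact recheck only for the winner, as suggested above.
value = 1
for p, e in zip(PRIMES, best_exps):
    value *= p**e
assert value <= TARGET
print(value, best_exps)
```

The epsilon matters: nearby smooth numbers differ in log by far more than accumulated float error, so an exact recheck of the rare near-hits is cheap.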
13:44:50 <wob_jonas> [ try 11111111111111167179461296463398102111816406250000x
13:44:50 <j-bot> wob_jonas: 2 2 2 2 3 3 3 3 3 3 3 3 3 3 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 11 11 11 11 13 13 13 13 13 17 19 19 23 23 23 23 23 23 23 23 53 61 73 | 5.61e34
13:44:54 <wob_jonas> this is the one I found yesterday, right?
13:46:00 <wob_jonas> I should have added code to print anything better than say 1e40, to know if this is a fluke
13:46:18 <wob_jonas> um, anything better than 1e38 rather
13:46:38 <wob_jonas> the code only prints the best found so far, so I won't know if it finds ten other 1e35 sized solutions
13:47:18 <wob_jonas> although technically I could modify the main part of the code and continue from where I stopped it, but I don't want to do that now
13:53:21 -!- salpynx has quit (Remote host closed the connection).
14:02:28 <wob_jonas> [ try 11111111111111158344848599503479968356777700860000x
14:02:28 <j-bot> wob_jonas: 2 2 2 2 2 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 5 5 5 5 13 13 13 13 17 17 17 17 17 17 17 17 17 19 19 19 19 19 19 19 19 19 19 31 89 103 | 4.72e34
14:02:49 <wob_jonas> now that one is new, and the best I've found so far
14:02:57 <wob_jonas> and, what do you know, 103 is a factor in it
14:04:27 <esowiki> [[User talk:A]] M https://esolangs.org/w/index.php?diff=65370&oldid=65322 * A * (+2810) /* Blocked */
14:07:29 <esowiki> [[User talk:A]] M https://esolangs.org/w/index.php?diff=65371&oldid=65370 * A * (-606) /* Minimal J for Beginners */
14:08:33 <wob_jonas> I wonder if I should try to make the memory access more predictable by really-deep-copying the large array, in the sense that I copy even the biginteger objects too (an unnatural operation on python) to have them reside mostly sequentially in memory
14:11:44 <wob_jonas> perhaps even 64-bit integers could be enough for the inner loop, and then recheck whenever I get an approximate match
14:13:56 <wob_jonas> That would solve the problem of the inefficient allocation
14:14:10 <wob_jonas> of course I'd still need to keep the array of exact numbers, but those would be rarely used
14:17:10 <cpressey> wob_jonas: I almost hate to mention it but... I wonder how suited Julia would be for this
14:17:39 <cpressey> I don't know how well the problem vectorizes, and it's not like vectorization is automatic
14:19:41 <wob_jonas> cpressey: I don't know, you can take a stab at trying to solve this if you want
14:20:45 <cpressey> wob_jonas: I didn't quite catch what the precise problem is, do you have a link?
14:21:01 <cpressey> I gather it has something to do with prime factorization :)
14:21:17 <wob_jonas> http://esolangs.org/logs/2019-08-02.html#lY
14:22:57 <wob_jonas> I might try to write a more efficient inner loop
14:23:35 <cpressey> factor: ‘11111111111111111111111111111111111111111111111111’ is too large
14:24:32 <int-e> cpressey: 78875943472201*182521213001*25601*9091*5051*271*251*41*11
14:25:04 <wob_jonas> int-e: I tested that it didn't have all small prime factors, but didn't get a full factorization
14:25:44 <cpressey> I doubt I will have much time to play with it
14:26:06 <wob_jonas> your "(10**50-1)/9" was a convincing enough nothing-up-my-sleeve number that I didn't think you'd cheat by choosing a number such that if you add a very small integer, it happens to factor completely into small primes
14:26:22 <wob_jonas> plus you already said what the best solution you had was
14:26:37 <int-e> wob_jonas: I didn't intend to cheat... I wanted something where it was easy to see progress :)
14:26:43 <int-e> (look for the first non-1 digit)
14:27:17 <wob_jonas> I don't find it easy to see progress because I can't count 15 ones by hand
14:27:39 <wob_jonas> which is why I have the computer print the difference in %e format
14:27:41 <int-e> wob_jonas: well, I put the numbers in a file, one line each
14:37:07 <cpressey> 13328592851757862349726964424185557245693157222400
14:37:25 <cpressey> [2,3,5,7,11,13,17,19,23,29,31,37,41,43,47,53,59,61,67,71,73,79,83,89,97,101,103,2,2,2,2,2,2,2,11,13,2,3,2,5,11,23,2]
14:37:33 <wob_jonas> [ try 13328592851757862349726964424185557245693157222400x
14:37:34 <j-bot> wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 3 3 5 5 7 11 11 11 13 13 17 19 23 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 | 2.22e48
14:37:51 <int-e> you don't have to use all of the primes
14:38:14 <cpressey> int-e: I know, but I thought it would be a good place to start
14:38:36 <cpressey> I at least fulfilled the letter of the challenge, given my limited time to work on it :)
14:39:00 <int-e> [ try 11111112137520138469338239234808374004904964760870
14:39:01 <j-bot> int-e: |value error: try
14:39:01 <j-bot> int-e: | try 1.11111e49
14:39:12 <int-e> [ try 11111112137520138469338239234808374004904964760870
14:39:12 <j-bot> int-e: |value error: try
14:39:12 <j-bot> int-e: | try 1.11111e49
14:39:13 <int-e> [ try 11111112137520138469338239234808374004904964760870x
14:39:14 <j-bot> int-e: |value error: try
14:39:14 <j-bot> int-e: | try 11111112137520138469338239234808374004904964760870
14:39:21 <wob_jonas> int-e: you need to load my environment first, by like
14:39:25 <j-bot> wob_jonas, changed to wob_jonas,#esoteric
14:39:56 <j-bot> wob_jonas, copied ,#esoteric from wob_jonas,#esoteric
14:40:09 <int-e> oh, per user state
14:40:20 <int-e> j-bot, load: wob_jonas
14:40:20 <j-bot> int-e, copied int-e,#esoteric from int-e,wob_jonas
14:40:22 <wob_jonas> int-e: yeah, it was a strange decision
14:40:33 <wob_jonas> int-e: you can also have multiple sessions in theory
14:41:11 <int-e> anyway, 11111112137520138469338239234808374004904964760870 is the best possible if all primes <= 103 need to be used.
14:41:13 <wob_jonas> int-e: sorry, apparently wrong syntax
14:41:27 <j-bot> wob_jonas, copied wob_jonas,#esoteric from int-e,#esoteric
14:41:43 <j-bot> wob_jonas, changed to wob_jonas,#esoteric and cleared it
14:41:52 <j-bot> wob_jonas, changed to ,#esoteric and cleared it
14:41:59 <wob_jonas> [ try =: ":@q: ,' | ', 0j_2":-&(9<.@%~_1+10x^50)
14:42:09 <int-e> (the target is 111...111/p_1*...*p_27 = 463255947, which is very feasible for brute force)
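int-e's cofactor trick can be sketched directly: with all 27 primes up to 103 used once, divide the target by their product and scan around the quotient (463255947 here) for the nearest integer that is itself smooth over those primes. A hedged illustration of that brute force, not int-e's actual code:

```python
PRIMES_27 = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47,
             53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103]

def is_smooth(n, primes):
    """True iff n factors completely over the given primes."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def nearest_smooth(target, primes):
    """Closest integer to target that is smooth over primes (scans outward)."""
    for d in range(target):
        if target - d > 0 and is_smooth(target - d, primes):
            return target - d
        if is_smooth(target + d, primes):
            return target + d

print(nearest_smooth(463255947, PRIMES_27))
```

Multiplying the primorial of the 27 primes by the best cofactor should reproduce, up to the exact rounding of the quotient, the 11111112137520138469338239234808374004904964760870 that int-e posted.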
14:42:44 <j-bot> wob_jonas, working session is ,#esoteric
14:44:55 <wob_jonas> hmm, the three-hour long program is getting close to done
14:45:26 <wob_jonas> I won't run a longer version of that, instead I'll have to improve the program
14:48:51 <wob_jonas> [ try 11111111111115397052046616165917913561809835753472x
14:48:52 <j-bot> wob_jonas: 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 7 11 11 11 11 17 17 17 17 17 17 19 19 19 19 19 29 41 43 43 47 47 | 4.29e36
14:49:21 <wob_jonas> not particularly good, but at least it's new
14:53:32 <esowiki> [[User talk:A]] M https://esolangs.org/w/index.php?diff=65372&oldid=65371 * Areallycoolusername * (+199) repo is almost complete
14:53:56 <esowiki> [[User talk:A]] M https://esolangs.org/w/index.php?diff=65373&oldid=65372 * Areallycoolusername * (-2)
14:54:51 <int-e> okay, that was stupid
14:55:13 <int-e> So I tried (10^49-1)/9...
14:55:33 <int-e> ...I got 1111111111111111140001838971183119543667539375000 which looked eerily similar to 11111111111111111400018389711831195436675393750000
14:56:20 <int-e> Which is nice in that it indicates a certain robustness of the approach. But other than that this should be utterly unsurprising :)
14:56:51 <int-e> (lesson: 10 is smooth)
14:57:53 <wob_jonas> int-e: sure, that's what you get if your search prefers small prime factors, like I do too
14:58:25 <wob_jonas> although the solution above happens to be not divisible by 5
14:58:26 <int-e> Well, mine doesn't do that, exactly. But many of the solutions still feature small prime factors :)
14:58:52 <int-e> I'm so happy that 11111111111111111215673466122483207680856045806875 is not divisible by 2 :)
14:59:48 <int-e> (I also wouldn't be surprised if it was actually optimal... but I see no way of proving such a thing.)
15:06:15 <wob_jonas> the three-hour search finished and didn't find anything better
15:20:28 -!- wob_jonas has quit (Remote host closed the connection).
15:22:51 -!- ais523 has joined.
15:33:20 <cpressey> ais523: I thought of some infinite initial conditions in Conway's Game of Life. For example, you could have an infinite barber pole. It doesn't need caps on the ends, it just goes on forever.
15:33:51 <ais523> you can do that sort of thing in most cellular automata, I think
15:36:03 <cpressey> I don't think there's anything particularly philosophically problematic about them, it's just that I don't know if anyone studies them much
15:37:51 <cpressey> It starts getting problematic when you start considering the infinite configuration which enumerates (and thus contains) all possible finite configurations
15:38:26 <cpressey> If not problematic, then at least kind of weird
15:40:57 <cpressey> One could probably make a philosophical objection that, for a sufficiently complex infinite form, a single tick is doing an infinite amount of computational work
15:41:25 <cpressey> The barber pole is simple and repeatable and you can efficiently predict how any part of it will be at any tick
15:42:06 <cpressey> Anyway, just thoughts I had recently
15:42:41 <ais523> that reminds me of a design in the Game of Life I made a while back, which contains a rake that moves at the maximum possible speed for a moving object
15:43:17 <ais523> so that it creates a constant stream of spaceships that, no matter what they do or what mess they create, will never be able to create anything that shoots down the rake producing them
15:44:02 <ais523> I was wondering if it would be possible to create some sort of infinitely active pattern like that, but what typically happens is that eventually an eater evolves out of the mess and neatly terminates the infinite stream of spaceships
15:53:48 <esowiki> [[Gulp]] N https://esolangs.org/w/index.php?oldid=65374 * Areallycoolusername * (+563) Created page with "'''Gulp''' is an [[esoteric programming language]] made by ][[User: Areallycoolusername|Areallycoolusername]]. It was made for golfing, and it is [[deque]]-based == Specifics..."
15:54:06 <esowiki> [[Gulp]] M https://esolangs.org/w/index.php?diff=65375&oldid=65374 * Areallycoolusername * (-1)
15:54:27 <kmc> by the way since it's CA time in here
15:54:31 <esowiki> [[Gulp]] https://esolangs.org/w/index.php?diff=65376&oldid=65375 * Areallycoolusername * (-2)
15:54:40 <kmc> how much have people studied CAs on arbitrary / random graphs?
15:55:31 <esowiki> [[Language list]] https://esolangs.org/w/index.php?diff=65377&oldid=65207 * Areallycoolusername * (+11) /* G */
15:55:54 <esowiki> [[User:Areallycoolusername]] https://esolangs.org/w/index.php?diff=65378&oldid=65181 * Areallycoolusername * (+11) /* Full List of languages I Made */
15:57:10 -!- ais523 has quit (Remote host closed the connection).
15:58:23 -!- ais523 has joined.
15:59:16 -!- cpressey has quit (Quit: WeeChat 1.4).
16:13:32 <tswett[m]> ais523: "I was wondering if it would be possible to create some sort of infinitely active pattern" - Golly comes with a couple of patterns that are more or less like that.
16:14:10 <ais523> yes, but it's hard to prove that they're like that and don't end up eventually chasing down and destroying themselves
16:14:47 <tswett[m]> Anyway, I've been thinking about replicators in CAs.
16:15:26 <tswett[m]> There's von Neumann's CA with, what, 27 states?
16:16:08 <ais523> the Game of Life has replicators but in general it seems like a really fragile system
16:16:26 <ais523> like, a random block or glider somewhere can completely break a complex pattern and there's nothing you can really do about it
16:17:22 <tswett[m]> I've seen... three general categories of replicators, I think?
16:17:56 <tswett[m]> Or, rather, three categories of CAs with replicators.
16:18:17 <tswett[m]> There are "naturalistic" CAs like Conway's Life, and Star Wars or whatever.
16:18:36 <tswett[m]> Simple rules, huge and complicated replicators.
16:19:22 <tswett[m]> There are von Neumann-style CAs. Complicated rules, large and complex replicators, but not quite as bad as the Conway's Life ones.
16:19:46 <tswett[m]> Then there are Langton's Loops-style CAs. Medium-complexity rules, very simple replicators.
16:22:12 <tswett[m]> The upside of the naturalistic and VN-style CAs is that you can build universal constructors in both of them.
16:22:27 <tswett[m]> I don't think Langton's Loops supports universal construction. I don't know if it can even do computation.
16:23:58 <tswett[m]> Let's see, I'm trying to remember why LL requires a "sheath". You know what I'm talking about?
16:24:20 <ais523> sort of, my memory of this is pretty vague
16:24:50 <tswett[m]> Similar to WireWorld, LL has "wires" that signals can go along. But unlike in WireWorld, the wires have to be surrounded on each side by a special state.
16:25:56 <esowiki> [[Braincells]] https://esolangs.org/w/index.php?diff=65379&oldid=65363 * HereToAnnoy * (+3169) Hopefully the spec is finished, just need execution examples and clarification. WIP
16:26:06 <tswett[m]> Hmmmm, here's one reason. In WireWorld, there's a particular state which indicates the tail of a pulse. In LL, on the other hand, the state for the tail of a pulse is identical to the background state.
16:26:40 <tswett[m]> So the "sheath" state is necessary to regenerate the wire in the wake of a pulse.
16:26:44 <tswett[m]> I wonder why it was designed that way.
16:27:55 <ais523> maybe it increases the chance of the wires acting sensibly when they collide?
16:28:55 <tswett[m]> Well, let me look up how construction works in LL.
16:33:19 -!- ais523 has quit (Quit: quit).
16:34:37 <tswett[m]> I'm reading Langton's paper in which (I think) he describes LL. http://www-users.york.ac.uk/~gt512/BIC/langton84.pdf
16:36:11 <tswett[m]> He writes that in order for a CA pattern to be considered properly self-replicating (as opposed to being a pattern which merely "gets replicated" by the rule), it ought to contain some "code" which is both used to direct the replication process, and copied into the daughter pattern.
16:36:32 <tswett[m]> Which is awfully similar to the way that a traditional quine works.
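(The quine parallel can be made concrete. A minimal two-line Python quine splits exactly the way Langton demands: the string literal is the "code", both used to direct the copying and copied into the output.)

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Run it and it prints its own two lines: `%r` re-emits the string as a quoted literal (the "copied" role) while the `%` formatting drives the reproduction (the "directing" role).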
16:36:38 <kmc> that's a tricky definition to make precise
16:36:47 <kmc> it's kind of like the question of whether viruses are alive
16:36:51 <kmc> they need a certain environment to reproduce
16:36:55 <kmc> but so does every lifeform
16:37:41 <esowiki> [[User:HereToAnnoy]] M https://esolangs.org/w/index.php?diff=65380&oldid=63533 * HereToAnnoy * (+101) Added [[Braincells]] to language list
16:39:25 <tswett[m]> I'm also reminded of a definition that someone on the Code Golf Stack Exchange site suggested for a "proper quine".
16:40:15 <tswett[m]> Which is that the program should contain at least one element which codes some *other* element of the program.
16:41:09 <tswett[m]> "Replicating cellular automata", like 1357/1357 or whatever it is, obviously don't allow you to create a replicator which satisfies that criterion, since each element of the mother pattern codes itself and only itself.
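(The rule tswett gestures at is usually written B1357/S1357, Golly's "Replicator" rule: birth and survival on any odd neighbor count. A sketch of one generation on a sparse grid; the helper names are mine.)

```python
from collections import Counter

ODD = {1, 3, 5, 7}  # B1357/S1357: birth and survival on odd counts

def step(live):
    # Tally Moore-neighborhood live neighbors of every candidate cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell with zero live neighbors never appears in counts, so a
    # lone live cell correctly dies; odd counts are alive next tick.
    return {cell for cell, n in counts.items() if n in ODD}
```

Because the rule is additive (next state is the XOR of neighbor states), every pattern eventually produces disjoint copies of itself, which is exactly why each element of the mother pattern "codes itself and only itself".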
16:42:22 -!- b_jonas has joined.
16:42:51 <b_jonas> I should probably write the inner loop in C or C++, with 64-bit floats
16:43:00 <b_jonas> but I don't promise that I'll do it
16:43:28 <kmc> what's it in now
16:43:57 <b_jonas> cpressey: infinite initial condition for game of life can be useful because it lets you send signals at light speed, while otherwise you can only send signals (repeatedly) at half light speed
16:44:23 <b_jonas> so yes, people do study that
16:45:13 -!- FreeFull has joined.
16:45:56 <b_jonas> tswett[m]: Neumann's CA => http://esolangs.org/wiki/Von_Neumann%27s_29-state_cellular_automaton -- not that there's much info there
16:49:38 <zzo38> How to determine what time zone is used to display the recent changes in a MediaWiki service if not registering an account?
16:49:53 <b_jonas> kmc: python, and I only have the bigint version, not one that does the inner loop with doubles and then checks near matches with bigints
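(The two-pass idea b_jonas describes, a fast float filter followed by exact bigint verification, can be sketched like this. The function name and parameters are mine, and it brute-forces exponent vectors rather than doing anything as clever as the real search.)

```python
import math
from itertools import product

def close_smooth(target, primes, max_exp, tol=1e-4):
    """Find products of the given primes that land near target.

    Float pass: compare sums of prime logarithms against log(target),
    which is fast but inexact. Bigint pass: recompute the product with
    exact integers for every survivor before accepting it.
    """
    log_t = math.log(target)
    logs = [math.log(p) for p in primes]
    hits = []
    for exps in product(range(max_exp + 1), repeat=len(primes)):
        if abs(sum(e * l for e, l in zip(exps, logs)) - log_t) < tol:
            n = 1
            for p, e in zip(primes, exps):
                n *= p ** e
            hits.append(n)
    return sorted(hits, key=lambda n: abs(n - target))
```

The point of the split is that the doubles reject almost everything in a few flops per candidate, and the arbitrary-precision arithmetic only runs on the handful of near misses.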
16:53:50 <b_jonas> zzo38: what page are you viewing on the wiki?
16:54:41 <kmc> b_jonas: cython is a really easy way to rewrite an inner loop as native code
16:57:01 <tswett[m]> Somebody made a hexagonal-neighborhood loop replicator CA, which is an awfully good idea: https://www.youtube.com/watch?v=_kTMO7oEN8U
17:02:03 <b_jonas> kmc: nah, I want to write it in C++, which I already know, and know enough to be able to figure out how to optimize it properly
17:02:16 <b_jonas> I won't try to learn some new tool for this
17:02:22 <kmc> well, ctypes is also easy
17:02:27 <kmc> but I recommend learning cython at some point
17:02:34 <kmc> you'll get the basics in no time
17:02:42 <b_jonas> but again, feel free to take a stab at the original problem too
17:03:15 <kmc> you pretty much just put c type declarations on your python variables
17:03:36 <tswett[m]> Lemme see. So, in LL, a 7 signal is the signal which indicates that the arm should be extended by one cell. The sheath is the part that actually responds to the signal. You couldn't have the background state respond to the signal directly, because if you're a background cell and there's a 7 next to you, you don't know whether you're supposed to respond to it or not.
17:03:36 <b_jonas> there's no need for anything like that here
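(For comparison, kmc's ctypes route amounts to declaring a C prototype by hand. A sketch calling libm; the library lookup is platform-dependent, and the fallback path assumes a glibc system.)

```python
import ctypes
import ctypes.util

# Locate and load the C math library (lookup is platform-dependent).
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# Declare the C prototype: double fmod(double, double).
libm.fmod.restype = ctypes.c_double
libm.fmod.argtypes = [ctypes.c_double, ctypes.c_double]
```

After the declaration, `libm.fmod(7.0, 3.0)` behaves like the C call; without `argtypes`/`restype`, ctypes would pass and return the wrong types silently.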
17:14:58 <zzo38> b_jonas: The recent changes (but not the esolang wiki; it is a different one)
17:16:28 <b_jonas> zzo38: so the Special:RecentChanges page in the default html view?
17:17:28 <esowiki> [[Semordnilap]] https://esolangs.org/w/index.php?diff=65381&oldid=60021 * Orby * (-8) /* See also */
17:25:50 <tswett[m]> Now I'm trying to figure out why, in Langton's Loops, two consecutive "4" signals are required in order to effect a turn, instead of just one.
17:31:37 <esowiki> [[Language list]] M https://esolangs.org/w/index.php?diff=65382&oldid=65377 * HereToAnnoy * (+17) /* B */ Added Braincells to the language list.
17:32:37 <b_jonas> zzo38: I don't know a good answer. if api.php is enabled (it is not on Wikia), even for read only, then you can use that to query the recent changes in a different format, but that's not trivial
17:33:18 <b_jonas> you can also use api.php to query the default timezone of the wiki, but I'm not sure if that's always the one used on recentchanges for unregistered users or something may override it
17:34:04 <b_jonas> zzo38: you can try asking this question in a mediawiki-related channel on freenode, such as #mediawiki
17:36:48 -!- Phantom__Hoover has joined.
17:37:09 <b_jonas> perhaps one of the SpecialPages also tells the default timezone? I dunno
17:39:23 <b_jonas> https://www.mediawiki.org/wiki/Manual:Timezone says how to set the default timezone and that that's used in Special:RecentChanges, but doesn't say how to query
17:40:45 <b_jonas> zzo38: view the page Special:GlobalPreferences#mw-prefsection-rendering and see what timezone it says there
17:41:34 <b_jonas> hmm no, that doesn't seem to work
17:42:05 <zzo38> It says it is an invalid special page
17:45:17 <b_jonas> zzo38: is the api.php enabled? if so, you can try loading /w/api.php?action=query&prop=info&meta=siteinfo&format=xmlfm&siprop=general|namespaces|namespacealiases|interwikimap|specialpagealiases|magicwords and see what it says in the timezone attribute of //general , but I'm not convinced that that's always right because I think there's multiple timezone settings
17:46:20 <b_jonas> but if the api.php is enabled, then it's possible to query the recent changes with it
17:46:31 <b_jonas> and you can compare the date there with the date in the html view
17:50:56 <b_jonas> zzo38: look at /w/api.php?action=query&generator=recentchanges&list=recentchanges and compare its timestamps with the one in /wiki/Special:RecentChanges
17:51:32 <b_jonas> and no, this won't work on wikia, or some other wikis where api.php is not enabled
17:53:14 <zzo38> OK, that works though on what I am trying to access.
17:55:12 <b_jonas> https://www.mediawiki.org/wiki/API:Main_page has the docs for api.php in case you want to go more completely with that, eg. get the Recent Changes from only there rather than just eyeball the timezone
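(The siteinfo query from the thread above, wrapped in a sketch; the JSON path assumes the current MediaWiki API layout, where the default timezone is the "timezone" field of the "general" block.)

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def wiki_timezone(api):
    # Query meta=siteinfo, as in the api.php URLs above, and pull the
    # wiki's default timezone out of the "general" block of the reply.
    q = urlencode({"action": "query", "meta": "siteinfo",
                   "siprop": "general", "format": "json"})
    with urlopen(api + "?" + q) as resp:
        info = json.load(resp)
    return info["query"]["general"]["timezone"]
```

As b_jonas notes, this tells you the wiki's configured default, which may not be what an override shows a given unregistered user on Special:RecentChanges.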
18:01:39 -!- MDude has quit (Ping timeout: 248 seconds).
18:06:06 <esowiki> [[Fit]] M https://esolangs.org/w/index.php?diff=65383&oldid=62321 * HereToAnnoy * (+743) Reduces from Boolfuck --> turing complete
18:13:20 <esowiki> [[Fit]] M https://esolangs.org/w/index.php?diff=65384&oldid=65383 * HereToAnnoy * (+1) Fixed typo : "(-v)+" ---> "(--v)+"
18:36:11 <esowiki> [[A1]] https://esolangs.org/w/index.php?diff=65385&oldid=59728 * Orby * (-5) /* See Also */
18:54:50 <tswett[m]> I wonder if I could come up with a cellular automaton that's kind of "in between" Codd's CA and Langton's Loops.
18:55:48 <tswett[m]> LL only has two commands: extend forwards and extend left. That's great as long as little square loopy replicators are the only thing you ever want to make.
18:57:04 <tswett[m]> Codd's CA has lots of commands that do lots of things, but a replicator in that CA is very complicated.
19:04:06 <tswett[m]> https://en.wikipedia.org/wiki/Codd%27s_cellular_automaton
19:05:02 <tswett[m]> I'm trying to think about how LL might be extended to permit, say, a loop with a kink in it.
19:06:02 <tswett[m]> In LL, replication essentially consists of just executing the program four times. It does exactly the same thing the first three times (extend for a while and then turn left), and something different the last time (extend for a while, then collide, causing various interesting stuff to happen).
19:06:11 <b_jonas> tswett[m]: wait, isn't that the same as Langton's loops?
19:06:28 <b_jonas> they both have eight states on a square grid and make squares
19:06:43 <tswett[m]> No, Codd's CA came first. Note that the pictured loop doesn't replicate; it merely extends an arm forever.
19:07:25 <tswett[m]> If you tried to make "a Langton's loop" in Codd's CA, you'd find that the program to generate one side of the loop is too long to fit inside of the loop.
19:07:59 <tswett[m]> Langton takes advantage of the fact that producing one cell of the loop requires a 3-cell instruction, but the program is executed 4 times, and 4 > 3.
19:08:00 <Hooloovo0> there's a lot of loop rules in golly's sample directory
19:09:32 <Hooloovo0> there's also a couple of similar-to-codd rules like devore
19:10:07 <tswett[m]> Yeah, the Devore rule is pretty much a strict improvement of Codd's rule. It's better in every way and lets you build a much, much smaller replicator.
19:11:40 <tswett[m]> I'm pondering this "loop with a kink in it" idea. You could do that with something very similar to a Langton's loop, if only you could somehow make it so that certain parts of the program are only executed some of the time.
19:14:48 <tswett[m]> So now I'm just thinking about how to accomplish that.
19:19:23 <tswett[m]> You'd want some way to store a finite state, and suppress some of the program some of the time depending on the state.
19:20:50 <tswett[m]> Hmmmmmmmm. I like the way my thoughts are going. :D
19:24:36 <int-e> daisy daisy... ah no, that was HAL's mind going.
19:26:47 <tswett[m]> The mother loop can be totally passive, and merely send the program out over and over again. The construction arm can have a part on it that's a state machine and filter.
19:31:37 <esowiki> [[Post Dominos]] https://esolangs.org/w/index.php?diff=65386&oldid=60349 * Orby * (-5) /* See also */
19:38:49 <kmc> we are all merely codons within the mother loop
19:49:48 -!- lldd_ has joined.
20:10:45 -!- Lord_of_Life_ has joined.
20:14:07 -!- Lord_of_Life has quit (Ping timeout: 246 seconds).
20:14:08 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
20:15:54 <tswett[m]> I think I've figured it out. I can do almost everything with 12 states.
20:21:18 <esowiki> [[Minaac]] https://esolangs.org/w/index.php?diff=65387&oldid=59930 * TheJebForge * (-120) /* Minaac */
20:23:22 -!- lldd_ has quit (Quit: Leaving).
20:27:32 <b_jonas> tswett[m]: implement it in GPU :-)
20:27:49 <tswett[m]> I'm gonna implement it in Golly. :D
20:49:47 <rain2> I have no idea how fractran works
21:07:42 <shachaf> a positive natural is a multiset of primes hth
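(Concretely: in the multiset view, multiplying by a fraction p/q removes a q from the multiset and adds a p, and it is only legal when a q is present, i.e. when the product stays integral. A sketch interpreter; the names are mine.)

```python
from fractions import Fraction

def fractran(n, program, max_steps=1000):
    # One FRACTRAN step: multiply n by the first fraction in the
    # program that yields an integer; halt when none does.
    for _ in range(max_steps):
        for f in program:
            m = n * f
            if m.denominator == 1:
                n = int(m)
                break
        else:
            return n
    return n
```

With the one-instruction program [3/2], the state 2^a * 3^b drains into 3^(a+b), i.e. the exponents get added, which is the usual first example.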
21:25:25 <b_jonas> do you mean like 11111111111115397052046616165917913561809835753472 ? is that a multiset too?
21:34:20 <shachaf> `factor 11111111111115397052046616165917913561809835753472
21:34:21 <HackEso> factor: ‘11111111111115397052046616165917913561809835753472’ is too large
21:34:27 <shachaf> Apparently it's too large to be a multiset.
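(HackEso's factor rejects bignums, but the numbers in this search are smooth by construction, so a few lines of trial division in Python handle them; the limit parameter is mine.)

```python
def factor_smooth(n, limit=100):
    # Trial division by candidates up to `limit`; composite divisors
    # never fire because their prime factors are stripped out first.
    factors = []
    d = 2
    while d <= limit and n > 1:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # leftover cofactor, not necessarily prime
    return factors
```

This is exactly what the j-bot `q:` queries in the channel are doing, just without the bignum size cap.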
21:37:41 <b_jonas> learn BLJ is a move that lets you solve NP-complete problems with no stars and just one key.
21:55:49 -!- b_jonas has quit (Remote host closed the connection).
22:35:40 <Sgeo_> Retroforth's case statement causes the function that's using it to exit if the condition is met
22:39:17 <kmc> so, know anything interesting about cellular automata on random graphs?
22:39:30 <kmc> seems like you could model some social behaviors that way
22:40:46 -!- xkapastel has joined.
22:52:07 -!- Phantom__Hoover has quit (Ping timeout: 245 seconds).
23:02:20 <shachaf> kmc: Golly. I've never even thought about cellular automata on non-planar graphs.
23:06:01 -!- FreeFull has quit.
23:19:55 <kmc> well I guess you're about ready then!
23:22:54 <shachaf> i'm ready for fluffy cat whiskers
23:42:20 <tswett[m]> I successfully made a loop that extends itself. Woo.
23:43:35 <shachaf> int i = 0; while(1) { this_block.append("print(%d);", i); i++; }
23:44:04 <Sgeo_> At some point I should make a new implementation of Braintrust. Except the best language for that is probably either Common Lisp or maybe some ... assembly like thing? And I have no interest in Common Lisp these days
23:44:25 <shachaf> Why not do it in ALGOL 68?
23:44:34 <shachaf> I think ALGOL 68 is quite an interesting language.
23:44:52 <shachaf> It has features not present in many or any modern languages.
23:45:09 <shachaf> And certainly in languages that existed in 1968.
23:45:13 <Sgeo_> Does ALGOL 68 have functionality to preserve the current state as an executable that will run another function when started?
23:45:42 <shachaf> I imagine you could dump memory to a file and load it back up later.
23:46:06 <Sgeo_> https://ccl.clozure.com/manual/chapter4.9.html
23:46:48 <Sgeo_> Or maybe I could just... directly copy+modify the current executable, if there's specific data in the executable in a predictable location
23:47:05 <shachaf> That seems like a reasonable approach.
23:47:26 <shachaf> `` xxd tmp/out.a > tmp/out.a.xxd
23:47:26 <HackEso> /hackenv/bin/`: line 5: xxd: command not found
23:47:58 <shachaf> `` hd tmp/out.a > tmp/out.a.hd
23:48:04 <HackEso> https://hack.esolangs.org/tmp/out.a.hd
23:48:04 <Sgeo_> I'm wondering if that's sort of like cheating, to compile into an interpreter and call it a "compiler"
23:49:24 <esowiki> [[User talk:A]] M https://esolangs.org/w/index.php?diff=65388&oldid=65373 * A * (+287) /* Concern */
23:57:03 <kmc> Sgeo_: that's one of the futamura projections. a compiler is a curried interpreter
23:57:10 <kmc> you provide the program and at some point later you provide the program's input
23:57:21 <kmc> so the question is just how much optimization takes place at the earlier point
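(kmc's framing as a sketch: the names are mine, and the "compiler" here performs no optimization at all, which is precisely the knob kmc is pointing at; a real partial evaluator would specialize the interpreter body against the fixed program.)

```python
def interpret(program, inp):
    # A tiny stack-language interpreter: program is a list of
    # ("push", n) and ("add",) instructions; inp seeds the stack.
    stack = [inp]
    for op in program:
        if op[0] == "push":
            stack.append(op[1])
        elif op[0] == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

def compile_(program):
    # The curried view: fix the program now, supply its input later.
    return lambda inp: interpret(program, inp)

add_five = compile_([("push", 5), ("add",)])
```

`add_five(37)` and `interpret([("push", 5), ("add",)], 37)` agree by construction; the first Futamura projection is the observation that doing nontrivial work at the `compile_` stage turns this closure into a genuine compiler.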