00:37:09 <esowiki> [[XENBLN]] M https://esolangs.org/w/index.php?diff=69961&oldid=69960 * PythonshellDebugwindow * (+1) /* Commands */

00:38:34 <esowiki> [[XENBLN]] M https://esolangs.org/w/index.php?diff=69962&oldid=69961 * PythonshellDebugwindow * (+37) /* Number non-literals */

06:54:50 <esowiki> [[Feather]] M https://esolangs.org/w/index.php?diff=69963&oldid=53242 * IFcoltransG * (+12) Added dead link template to broken link

07:11:48 <esowiki> [[Clue (oklopol)]] https://esolangs.org/w/index.php?diff=69964&oldid=69957 * IFcoltransG * (+172) /* Syntax */ Rewrote some sentences for clarity, fluency, and fixing of a spelling mistake

07:14:58 <esowiki> [[Clue (oklopol)]] https://esolangs.org/w/index.php?diff=69965&oldid=69964 * IFcoltransG * (+65) /* Syntax */ Added note on spaces in identifiers

07:54:10 <zzo38> Is it possible in PostScript to specify output page numbers using the "pdfmark" command?

07:55:51 <zzo38> (Also, do any PostScript interpreters that do not produce PDF support any subset of uses of pdfmark?)

08:12:18 <esowiki> [[Feather]] https://esolangs.org/w/index.php?diff=69966&oldid=69963 * Oerjan * (-32) Link to esolangs.org log instead

08:17:05 <zzo38> Also, I think I read somewhere that if you make DjVu output from PostScript then it tries to figure out automatically what to put into foreground/background. Is there a way to specify it yourself? I don't really know if it makes sense, as I don't know how DjVu works.

13:19:06 <esowiki> [[FAKE]] M https://esolangs.org/w/index.php?diff=69967&oldid=68815 * Argv0 * (+15) /* Commands */ conditional operator syntax change

13:21:29 <esowiki> [[FAKE]] M https://esolangs.org/w/index.php?diff=69968&oldid=69967 * Argv0 * (-12) /* Examples */ adaptation to conditional operator syntax change

17:16:20 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=69969&oldid=69900 * Amakukha * (+206) /* Introductions */

17:17:05 <esowiki> [[User:Amakukha]] N https://esolangs.org/w/index.php?oldid=69970 * Amakukha * (+62) Created page with "My favourite [[Brainfuck|Brainf**k]] dialect is [[Brainlove]]."

17:30:50 <esowiki> [[Brainfuck]] https://esolangs.org/w/index.php?diff=69971&oldid=69560 * Amakukha * (+95)

17:32:06 <esowiki> [[Urban Müller]] https://esolangs.org/w/index.php?diff=69972&oldid=30824 * Amakukha * (+103) /* External resources */ +recent talk

17:36:16 <esowiki> [[Brainfuck]] https://esolangs.org/w/index.php?diff=69973&oldid=69971 * Amakukha * (+22) BF was inspired by FALSE according to the video

17:37:26 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=69974&oldid=69969 * Pitust * (+117) I need to complete the sign-up, right?

18:21:00 <int-e> tromp: Ooph, I pushed my formalization effort through. https://int-e.eu/~bf3/tmp/Goodstein.html

18:25:15 <int-e> tromp: The issue with these formalizations is that one has to make many intuitive steps more detailed. This is hard, and easily ends up in a mess. In this case the property that stood out to me was this: Consider the map that takes a number in hereditary base n and converts it to hereditary base n+1. So for n=2, we map [0..8] to [0,1,3,4,9,10,12,13,27]. This map is monotonic. Obvious? Yeah,...
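For the small values int-e lists, the map agrees with simply reading each number's base-2 digits in base 3. A minimal sketch (the helper name `rebase` is mine, not from int-e's formalization) that reproduces the list and checks monotonicity:

```python
def rebase(m, b_from=2, b_to=3):
    """Reinterpret the base-b_from digits of m in base b_to."""
    digits = []
    while m:
        digits.append(m % b_from)
        m //= b_from
    return sum(d * b_to**i for i, d in enumerate(digits))

vals = [rebase(m) for m in range(9)]
print(vals)  # [0, 1, 3, 4, 9, 10, 12, 13, 27] -- strictly increasing
```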

18:55:14 <int-e> tromp: It's actually a subscript (for the next character) in the editor. Could be replaced by _ for ASCII.

18:57:04 <int-e> tromp: If you look at the .thy file it's actually \<^sub> there... but nobody wants to type or read things like that, I believe.

19:00:44 <int-e> And yeah, it was quite some effort. I mean, you brought this up a week ago; I spent maybe a day wrapping my head around why things work, and realized that while I kind of trusted what I did, it was subtle enough to have gone wrong somewhere... so I foolishly started to formalize and it turned out to be a bit trickier than anticipated.

19:03:18 <int-e> And I haven't decided yet where to put it permanently. But I'll find a public place.

19:13:42 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=69975&oldid=69974 * Asasnat * (+306)

19:14:19 <esowiki> [[User:Asasnat]] N https://esolangs.org/w/index.php?oldid=69976 * Asasnat * (+31) Created page with "hi im guy who likes programming"

19:17:52 <int-e> b_jonas: Nah, I'm not in the mood for hydras, though that should fit into the same kind of ordinal number representation.

19:24:04 <int-e> b_jonas: Also the point here is that the Goodstein problem is really subverted... we get termination for free (essentially) because it's all primitive recursion. (And the lambda calculus version is typeable in System F... again termination is free.) But proving that it actually implements the Goodstein iteration is where all the effort goes instead.

19:31:52 <arseniiv> suppose you have a biased coin with probability of heads p. How do you find the most uniform distribution on a set {1, …, k} which can be realized by n flips of that coin (and assigning each outcome to 1, …, k)? I came to a greedy algorithm which ended up not optimal. Brute force search suffers from combinatorial explosion even for quite small n, k

19:34:25 <int-e> arseniiv: You should try this month's Ponder This problem, it has a similar (but more awkward) bin-packing flavour.

19:34:51 <arseniiv> my algorithm was the following: sort coin flip outcomes from the most probable to the least probable and then assign each one in turn so that the current (incomplete) distribution has the largest entropy. Sadly, for (n, k, p) = (4, 2, 0.7) that gives a suboptimal answer
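arseniiv's greedy can be sketched directly (function names are mine; the log only gives the verbal description, and arseniiv notes it is suboptimal for (n, k, p) = (4, 2, 0.7)):

```python
from itertools import product
from math import log2

def entropy(weights):
    """Shannon entropy of a (possibly unnormalized) distribution."""
    total = sum(weights)
    return -sum(w / total * log2(w / total) for w in weights if w)

def greedy_assign(n, k, p):
    # Sort the 2**n flip outcomes by probability (descending), then put
    # each one into whichever of the k buckets maximizes the entropy of
    # the partial distribution built so far.
    outcomes = sorted(
        (p ** sum(o) * (1 - p) ** (n - sum(o)) for o in product((0, 1), repeat=n)),
        reverse=True,
    )
    buckets = [0.0] * k
    for q in outcomes:
        best = max(
            range(k),
            key=lambda i: entropy(buckets[:i] + [buckets[i] + q] + buckets[i + 1:]),
        )
        buckets[best] += q
    return buckets
```

For a fair coin and one flip this recovers the exact uniform split, e.g. `greedy_assign(1, 2, 0.5)` gives `[0.5, 0.5]`.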

19:36:11 <arseniiv> int-e: ok, I'll check that out. Though I'm not at all into packing; this one just popped up from a discussion of making almost uniform distributions with a fair coin, which is far more trivial

19:36:18 <int-e> arseniiv: It's something that /should/ be NP-hard but will be awkward to actually prove NP-hard.

19:37:57 <arseniiv> I reckon if we have a completely arbitrary distribution instead of the binomial here, it could get tricky enough to be more obviously NP-hard, maybe

19:38:02 <int-e> Because bin-packing is NP-hard and I don't see how the additional structure that is there helps once you get close to the optimum.

19:39:36 <int-e> (Though obviously I may just lack imagination. My "awkward to prove" includes the possibility that it's actually false for some incredible reason.)

19:40:05 <arseniiv> ❝Your challenge this month is to design a game with 10 ladders and snakes altogether that will lead to an expected number of moves (rounded to the 6th decimal place) of 66.978705.❞ => omg

19:40:21 <int-e> arseniiv: If you have an arbitrary partition of 1 into positive rational numbers, and you try to split them as closely into two parts as possible, that *is* NP-hard.

19:41:38 <b_jonas> int-e: but isn't it more like a lattice small vector search problem, with the O(sqrt(n)) types of boxes around n/p counting?

19:41:47 <int-e> (Deciding whether you can get equality is NP-complete, cf. https://en.wikipedia.org/wiki/Partition_problem)
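The linked Partition problem has a standard pseudo-polynomial DP for integer weights (int-e's rationals would first be scaled to a common denominator); a minimal sketch:

```python
def can_partition(nums):
    # Decide whether the integer multiset nums splits into two subsets
    # of equal sum, by tracking all reachable subset sums up to half
    # the total.  Pseudo-polynomial: O(len(nums) * sum(nums)).
    total = sum(nums)
    if total % 2:
        return False
    half = total // 2
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= half}
    return half in reachable
```

For example `can_partition([3, 1, 1, 2, 2, 1])` is true (5 + 5), while `can_partition([2, 2, 3])` is false.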

19:42:46 <arseniiv> int-e: oh, that’s pretty close! I’ll tell this to the person who was interested in the packing. I bet splitting into 3, 5, etc. parts doesn’t suddenly stop being NP-hard either

19:43:55 <int-e> b_jonas: The downside is that the number of pieces you're working with is exponential in the input size.

19:44:31 <b_jonas> int-e: yes, but they're identical pieces, like in a small vector search problem, or like in that problem we considered some months ago about finding a number close to a target with only small divisors

19:48:13 <int-e> Hrm, we should probably make the output size the parameter here, not the input size (because the input is compressed too much). The output is a partition into k parts, each of which can be described by n n-bit numbers (where n is the number of different probabilities).

19:49:46 <b_jonas> all the other types of boxes are small even if you take all the available ones together

19:51:23 <b_jonas> hmm, maybe they can be if the optimum is exponentially close to 1/k so you have to approximate it very closely

19:56:12 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”).

19:59:13 <int-e> arseniiv: This should approximate very well, but except for small n I would not have big hopes of finding the optimum. And I'm not really sure how small "small" is here.

20:01:42 <b_jonas> arseniiv: I think how you find good approximations for these is to type your program into an irc channel full of geeks and hope they'll spend their time to torture their computer to find a good approximation and tell you

20:04:24 <int-e> arseniiv: I guess you can also make a collection of micro-optimizations... different multisets of probabilities that have almost the same sum, that you can swap between the partitions to make small adjustments.

20:05:23 <int-e> arseniiv: At the very least this gives you something to burn CPU cycles with, in contrast to the greedy algorithm which will be over very quickly and never get a better answer.

20:06:10 <arseniiv> int-e: b_jonas: (and others interested) tangentially, how would you generate *truly* uniform variates (on that same finite set 1..k) having an infinite sequence of fair coin flips? I believed one simple algorithm using a binary tree is optimal until I calculated the entropy several years later. Now I was told another algorithm, quite a more efficient one as it seems, though I haven’t checked its optimality by myself. Could you reinvent it? (It should

20:06:59 <b_jonas> arseniiv: if k is small, throw your coin k times; if it comes up heads the r-th time but tails all other k-1 times, then your result is r; otherwise retry
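b_jonas's rejection scheme works because every accepted sequence (exactly one head, at position r) has probability p*(1-p)**(k-1), the same for every r, so conditioned on acceptance the result is uniform. A minimal sketch (function name mine):

```python
import random

def uniform_biased(k, p, flip=None):
    # Flip the biased coin k times; accept only rounds with exactly one
    # head, whose position (1-based) is the result.  All accepted
    # sequences are equally likely, so the output is uniform on 1..k.
    flip = flip or (lambda: random.random() < p)
    while True:
        flips = [flip() for _ in range(k)]
        if sum(flips) == 1:
            return flips.index(True) + 1
```

The expected number of rounds grows quickly for large k, which is why b_jonas restricts this to small k.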

20:07:36 <int-e> arseniiv: You can get as close as you want to log_2(k) coin flips per choice, but at the expense of keeping increasingly bigger entropy buffers.

20:08:59 <arseniiv> b_jonas: no, I have no practical need to solve instances of this problem, though it’s a bit interesting as we can see here :)

20:09:15 <b_jonas> arseniiv: or in practice, flip the coin enough times to get more than 256 bytes of entropy (how many times that is depends on p), feed that to initialize a cryptographic pseudorandom number generator, and use the outputs of that for all the randomness you want to generate

20:09:49 <b_jonas> you can get an arbitrarily long random output, nobody will notice that the seed is only 256 bits long

20:11:23 <arseniiv> <int-e> arseniiv: You can get as close as you want to log_2(k) coin flips per choice, but at the expense of keeping increasingly bigger entropy buffers. => yeah that’s what that newer algorithm needs, but it has a neat and very concise description which is yet to be spoiled here

20:12:27 <b_jonas> so if you know that your coin's probability is between 1/4 and 3/4, then the entropy of one flip is at least 0.81 bits, so flip 632 times and use that as your seed

20:13:34 <b_jonas> if you need only a few random outputs and you want fewer coin flips, then use arithmetic coding to approximate a real number with smaller and smaller intervals after each coin flip, and generate your randomness from that real number when the output is certain
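b_jonas's interval idea can be sketched with exact rationals (names are mine). Each flip narrows [lo, lo+w) proportionally to p and 1-p, so the limiting real number is uniform on [0, 1); as soon as the interval fits inside one of the k equal cells, the output digit is certain:

```python
import math
from fractions import Fraction

def draw_uniform(flip, p, k):
    # Arithmetic-coding sketch: flip() returns True (heads, prob. p) or
    # False (tails).  Returns a uniform value in 1..k once the interval
    # [lo, lo+w) lies entirely inside one cell [j/k, (j+1)/k).
    p = Fraction(p)
    lo, w = Fraction(0), Fraction(1)
    while True:
        cell = math.floor(lo * k)
        if (lo + w) * k <= cell + 1:   # interval fits inside one cell
            return cell + 1            # value in 1..k
        if flip():                     # heads: keep the left part
            w *= p
        else:                          # tails: keep the right part
            lo += w * p
            w *= 1 - p
```

With p = 7/10 and k = 2, two heads in a row shrink the interval to [0, 0.49), which is inside [0, 1/2), so the answer 1 is certain after two flips.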

20:14:26 <b_jonas> there's probably also a way to use arithmetic coding if you don't know the exact value of p, but it might be ugly

20:14:48 <b_jonas> at least I don't know a way to do that optimally, but I could probably do it to within a constant factor of flips

20:16:01 <arseniiv> the description they gave me was this (for the fair coin): use the coin to divide a line segment into halves etc., and when the small subsegment is completely inside of one of k equal parts of the original segment, you have one value; then you divide that part into k parts and continue further. So I definitely see the need for unbounded entropy storage but the description is quite neat in itself

20:16:32 <arseniiv> <b_jonas> if you need only a few random outputs and you want fewer coin flips, then use arithmetic coding to approximate a real number with smaller and smaller intervals after each coin flip, and generate your randomness from that real number when the output is certain => ah you wrote it all already

20:17:01 <arseniiv> so now I see how that person gave an answer almost immediately: they probably knew arithmetic coding too

20:17:53 <arseniiv> <b_jonas> and then use the HT and TH results as your fair flip input => definitely cool thing

20:18:11 <b_jonas> arseniiv: https://www.perlmonks.com/?node_id=877696 has an example code for arithmetic coding with big integers; practical algorithms for compression are trickier because they want to stream the output without having to keep all the input or all the output in memory

20:19:21 <b_jonas> it's an example of steganographic typography: I use a known pre-existing text verbatim, and hide information in its presentation

20:19:30 <int-e> arseniiv: (It's not code because I (deliberately) didn't include the policy for balancing refills and making choices.)

20:20:29 <arseniiv> and the suboptimal algorithm was to have a set of nodes, just one node at the start, and double them for each coin flip, giving an answer if there are enough nodes, and continuing with the remainder of nodes otherwise. That gives quite a heavy entropy loss for some k but it runs in constant space

20:22:51 <int-e> arseniiv: I made this a sketch because I wanted to make the following point: This procedure does not waste any entropy, except in the FAIL case in make_choice. So in the long run, you need to minimize the probability of that case... which means you want to make the range large. The downside of having a large range is startup cost, and, if you overdo it, expensive computations. But in practice,...

20:27:11 <arseniiv> ah, so it's unbounded if we don't want to lose entropy; but if we can afford the loss, I see, it can even be simplified to the tree algorithm

20:28:05 <arseniiv> but that one isn’t good because we can’t tweak it, and a normal arithmetic coding is tweakable

20:30:23 <int-e> The add_flip can be generalized to adding an arbitrary uniform choice (multiply by number of choices c instead of 2; add value [0..<c])

20:31:07 <arseniiv> <int-e> what do you want to tweak? => the amount of entropy we discard/keep. I just hadn’t thought about that at all before I wrote here, as I wasn’t fully aware that we need unbounded space to avoid losing entropy

20:31:56 <int-e> And then you can sample a distribution described by frequencies [a1,a2,...,an] in three steps: 1) sample uniformly from a1+a2+...+an choices. 2) find corresponding ak and offset into ak chunk (in range [0..<ak]). 3) add back a uniform choice of size ak with the offset as value to the pool.
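int-e's actual sketch is not in the log; the following is a reconstruction of the pool he describes, with the invariant that v is uniform on [0, r). `add_choice` is the generalized `add_flip` (c = 2 recovers a coin flip), `make_choice` wastes entropy only in its FAIL case, and `sample_freqs` follows the three steps above. The refill policy (one flip per FAIL) is my own placeholder, since int-e deliberately left the policy out:

```python
def add_choice(state, value, c):
    # Fold a uniform choice from [0, c) into the pool.
    # add_flip is the special case c == 2.
    v, r = state
    return (v * c + value, r * c)

def make_choice(state, k):
    # Draw a uniform value from [0, k).  On success the leftover pool
    # stays uniform on [0, r // k); entropy is lost only on FAIL
    # (returned choice is None), with probability (r mod k) / r.
    v, r = state
    q = r // k
    if v < q * k:
        return v % k, (v // k, q)
    return None, (v - q * k, r - q * k)

def sample_freqs(state, freqs, flip):
    # Sample a distribution with frequencies [a1, ..., an]:
    # 1) draw uniformly from a1+...+an choices, refilling on FAIL;
    # 2) find the category i and the offset within its chunk;
    # 3) put the offset back as a uniform choice of size a_i.
    total = sum(freqs)
    while True:
        u, state = make_choice(state, total)
        if u is not None:
            break
        state = add_choice(state, flip(), 2)  # placeholder refill policy
    for i, a in enumerate(freqs):
        if u < a:
            return i, add_choice(state, u, a)
        u -= a
```

For instance, with the pool state (2, 4) (v = 2 uniform on [0, 4)), `make_choice((2, 4), 3)` succeeds with choice 2 and leftover state (0, 1).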

20:33:11 <int-e> Also, range coders are a kind of arithmetic coders anyway; the differences aren't so big.

20:36:33 <int-e> For this particular application (fair die rolls) I like the clean invariant (v uniformly random in [0..<r]), where with arithmetic coders one often has to worry about rounding or switch to rational numbers.

20:37:02 <int-e> But for the more common practical application (namely, compression) that hardly matters.

23:38:50 <zzo38> What I think a document storage format should be is one that stores compressed rasters, with the ability to efficiently encode multiple resolutions and/or colour spaces, supports encoding the represented text (for use of search and copying), and supports a subset of pdfmark features, but not all of them. (DjVu is almost like this, I think.)

23:45:04 <zzo38> The subset of pdfmark features I think should be supported is: /PAGELABEL, /ANN (only for links to pages in the same document; the only optional keys allowed are /Page and /View, and the only valid view type is /XYZ, and the zoom level isn't used), /OUT (with the same restrictions as for /ANN, except /Count is also allowed).