01:09:45 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”).

03:42:56 <zzo38> C doesn't have macros that are expanded only where a variable name is expected (similar to vardef in METAFONT).

04:28:22 <zzo38> I have read that cell phones cannot send/receive faxes due to the codec in use, but is the codec compatible with slow scan format?

08:47:09 <cpressey> Defining a function is like adding an axiom, in the sense that it asserts that the hitherto-meaningless name of the function now has such-and-such meaning.

09:01:57 <int-e> There's a definitional principle by specification: If you prove (exists x. P(x)), then that allows you to introduce a fresh constant k, and assert P(k) as an axiom.

09:02:41 <int-e> The proof of exists x. P(x) is fairly trivial if P(x) has the form x = foo with x not occurring in foo.

09:09:57 <int-e> Hmm. I believe Isabelle/Pure really only has the special case of definition by equality. Isabelle/HOL adds a Hilbert choice operator, so you get the specification principle by defining k = SOME x. P(x), and noting that exists x. P(x) ==> P(SOME x. P(x)) <==> P(k).
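
int-e's specification principle can be sketched in Lean 4, where `Classical.choose` plays the role of the Hilbert choice operator `SOME` (the names `k` and `k_spec` are illustrative, not from the chat):

```lean
-- From a proof of ∃ x, P x, Hilbert choice yields a constant satisfying P.
variable {α : Type} {P : α → Prop}

-- k is "SOME x. P x": a witness extracted by choice (hence noncomputable).
noncomputable def k (h : ∃ x, P x) : α := Classical.choose h

-- The specification principle: the chosen constant satisfies P.
theorem k_spec (h : ∃ x, P x) : P (k h) := Classical.choose_spec h
```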

09:13:45 <int-e> And you want proper definitions in every practical proof framework because they are crucial for compression.

09:14:52 <int-e> In *theory*, of course you can treat definitions and lemmas as abbreviations to be unfolded.

09:15:25 <rain1> but if you stick to functions with no infinite loops they can be read as theorems (even if they are often boring ones)

09:15:49 <Taneb> I think all functions are proofs of theorems, just not necessarily in a consistent logic

09:17:02 <int-e> When I write a function of type Nat -> Nat, I usually care about the values. But usually not enough to try to specify what they are in the type.

09:18:18 <cpressey> If I use a dynamically typed language, then I am using a "degenerate type system with only one type" in the view of type theory, and I imagine that makes it difficult to form any interesting proofs. But I can *program* in it just fine.


09:18:50 <int-e> And yes, indeed, the potential for nontermination makes the picture of functions as proofs even more inaccurate.

09:19:52 <int-e> Taneb: It's about giving a *name* to a value, which is the essence of a definition of a function.

09:21:33 <int-e> Of course this is my own interpretation of something cpressey wrote. I don't know whether it's the intended one.

09:22:51 <cpressey> int-e: I feel like I *kind of* follow your explanation with P(x), if, to prove that x (for example factorial) exists you only need to supply the definition of it (a term exists that we can call "factorial")

09:24:33 <cpressey> But then, what is P() in general? It seems like it could be the evaluation relation for your environment?

09:28:15 <int-e> (Which specifies the factorial function, assuming f : nat -> nat, but makes existence of f non-trivial.)

09:30:31 <int-e> cpressey: What I'm saying is very much inspired by how Isabelle/HOL deals with these things.

09:32:23 <int-e> For example, there is a function definition package that *can* deal with the above specification of the factorial function. Internally it'll extract a dependency relation (here, x+1 -> x; in general those will be tuples of all the arguments) and shows that it is well-founded (for which it has various heuristics)... and then one can show that f exists by induction on that relation.

09:36:45 <cpressey> rain1: Yeah, and even when it's not uniquely determined, you get at least one theorem "for free" by parametricity, as I understand it.

09:38:06 <rain1> if a function works for all types you know it doesn't inspect and case analysis on data of that type

09:46:24 <cpressey> On the other hand, it's possible for a function to uphold an invariant that cannot be captured by any type.

10:21:24 <b_jonas> cpressey: definitions aren't like axioms in that they don't let you prove things that were meaningful but you couldn't have proved from your existing axioms.

10:29:43 <cpressey> My comparison to axioms was meant to highlight that before you give the definition of factorial there's no way in hell you can deduce the truth or falsity of the statement "factorial(5)=120"

11:22:52 <ais523> in which the final step is "check if this hash has ever been output for a different input file; if so, increment the hash and repeat this step"

11:23:38 <b_jonas> ais523: I'm not sure if it's well-defined whether a hash has been output, and which of two hashes has been output first.

11:24:14 <b_jonas> ais523: and no, (a) the magic box does more than just produce hashes, and (b) it uses a longer digest and one that's cryptographically safe so you can't produce collisions

11:24:45 <ais523> the thing that the magic box produces is a hash by definition, isn't it? except that it might not be deterministic

11:25:15 <b_jonas> ais523: it need not be. it could be just an entirely random value that it doesn't compute from the input or anything

11:25:16 <ais523> the magic boxes are slightly more useful if you also add the proviso that the same code will be returned for anyone, if they provide the same file as input

11:25:35 <ais523> I would argue that *even if* it isn't computed from the input, it's still a hash function

11:25:40 <b_jonas> ais523: sort of but not really. if you want to check for collisions, you can just use an ordinary hash independent of the magic box.

11:26:26 <ais523> it takes an arbitrary data structure and returns a single integer, consecutive from 0

11:27:18 <ais523> another idea I had, along similar lines, was a function that interns pairs of integers

11:27:30 <ais523> when you have this, your language no longer needs data structures, as you can do cons just on integers

11:27:55 <ais523> (there are bijections between Z² and Z, so I guess you wouldn't even need to use interning if you didn't mind the size of the numbers exploding)
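
The Z² ↔ Z bijection ais523 mentions can be sketched as the Cantor pairing function composed with a zigzag Z ↔ N map (a minimal Python sketch; the function names are illustrative):

```python
from math import isqrt

def z_to_n(z: int) -> int:
    # zigzag bijection Z -> N: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * z if z >= 0 else -2 * z - 1

def n_to_z(n: int) -> int:
    # inverse of the zigzag
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

def pair(a: int, b: int) -> int:
    # Cantor pairing on N, lifted to Z x Z (the image lies in N, a subset of Z)
    x, y = z_to_n(a), z_to_n(b)
    return (x + y) * (x + y + 1) // 2 + y

def unpair(p: int) -> tuple:
    # invert Cantor pairing: recover the diagonal w, then the coordinates
    w = (isqrt(8 * p + 1) - 1) // 2
    y = p - w * (w + 1) // 2
    x = w - y
    return n_to_z(x), n_to_z(y)
```

This illustrates the "numbers exploding" caveat: `pair` roughly squares the magnitude of its arguments, which is exactly what interning would avoid.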

11:28:14 <b_jonas> ais523: that's what cons does if there's no garbage collection, except I think if you say interning, you want a deterministic function

11:29:35 <ais523> anyway, I think "content-addressable storage" is the general term for your magic boxes, except that two different CASes are not usually linked to each other

11:29:46 <b_jonas> that sounds typical for these golf languages, normally you wouldn't want to intern every pair, because it's sort of slow, but in a golf language you might not care

11:30:16 <ais523> yes, this seems like an operation that is probably usable by golfing languages only

11:30:46 <b_jonas> ais523: yes, and https://valerieaurora.org/monkey.html is the clear explanation of them

11:31:54 <b_jonas> the magic boxes are not only linked, but can also store practically any amount of data (limited by the boxes' bandwidth) forever for free, unlike in the real world

11:32:45 <b_jonas> ais523: rsync => not in the traditional system, but it makes clever use of hashes to find likely matching infixes among files, much as some delta compression algorithms do

11:33:47 <ais523> Riot Games wrote an article about how they do incremental updates, without needing to know what's already installed on the target computer

11:33:55 <b_jonas> because deterministically finding all common infixes is too expensive, but finding enough of them to compress well in typical practical cases is good enough

11:34:29 <ais523> it works by breaking files into sections based on their content, and sending the hash of each section that should be in the new files

11:34:48 <ais523> then the existing files are used to find as many of those sections as possible, and the ones which aren't found are downloaded
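
The content-based splitting ais523 describes can be sketched with a toy rolling checksum: cut wherever the checksum of the last few bytes hits a fixed pattern, so boundaries depend only on local content and identical regions chunk identically at any offset. Real systems use stronger rolling hashes (e.g. Rabin fingerprints); the window size and mask here are illustrative:

```python
def chunk_boundaries(data: bytes, window: int = 16, mask: int = 0xFF) -> list:
    """Content-defined chunking sketch: cut where the rolling sum of the last
    `window` bytes matches `mask` (expected chunk length ~ mask + 1 bytes)."""
    cuts = []
    s = 0
    for i, byte in enumerate(data):
        s += byte
        if i >= window:
            s -= data[i - window]          # keep s = sum of the last `window` bytes
            if (s & mask) == mask:         # boundary depends only on local content
                cuts.append(i + 1)
    return cuts
```

Because each cut depends only on the preceding window of bytes, inserting data near the start of a file shifts later boundaries without changing them, which is what lets the receiver find unchanged sections by hash.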

11:35:33 <ais523> incidentally, A Pear Tree's current algorithm for finding sections of files with a CRC of 0 is quadratic

11:36:10 <ais523> so maybe there isn't even a need to define the sections in terms of content, you could just do it in terms of, say, a CRC-128

11:36:39 <ais523> (you would expect no random collisions with one of those; even though it isn't cryptographically secure, it seems likely that any actual collisions would have been planted by a human)

11:37:13 <b_jonas> ais523: ah yeah, that reminds me of one of my favorite algorithmic problems. you get a vector of nonnegative machine-sized integers, find the infix in that vector that maximizes the product of the length and the minimum of that infix.

11:37:41 <b_jonas> if n is the length of the vector, there's a trivial O(n**3) algorithm, an easy O(n**2) algorithm, and a tricky O(n) algorithm

11:40:02 <ais523> there's an easy O(nk) algorithm too, which might or might not be faster depending on the size of the numbers

11:42:53 <ais523> I think the easy O(n²) algorithm is "for each element in the vector, scan forwards until you find a smaller element, then take the maximum among the infixes calculated this way"

11:43:13 <b_jonas> in case you want to experiment, a test vector is [138, 222, 263, 156, 176, 58, 116, 211, 293, 481, 352, 217, 454, 639, 423, 580, 819, 716, 845, 999, 936, 714, 621, 569, 361, 175, 52, 57, 33] and the largest product is 5121, I have two longer test vectors that I can paste

11:44:07 <ais523> the easy O(nk) algorithm is "for each k from 1 to the highest element in the vector, look for maximum-length infixes whose minimum is k, then take the maximum among the infixes calculated this way"

11:46:04 <ais523> easy O(n log n): sort the indexes of the vector by their values, then consider the infixes between the smallest and second-smallest, second-smallest and third-smallest, etc., values

11:46:48 <ais523> ah no, you have to extend each of those infixes outwards until you reach a value that's smaller

11:48:19 <ais523> more difficult O(n log n): consider the entire list as an infix, and recursively apply this algorithm to prefix before the smallest element, suffix after the smallest element

11:48:32 <ais523> this is O(n log n) if you speed it up by locating all the element locations in order from smallest to largest first

11:48:47 <ais523> because, then you'll always know what the smallest element in the infix you're considering is

11:49:32 <ais523> the recursion itself becomes O(n) once you have the O(n log n) index of elements in order

11:50:08 <b_jonas> ais523: how does that work? how do you do a recursion "in that order"? a recursion would usually have to descend to all infixes on one side, then all infixes on the other side.

11:50:41 <ais523> so you can do a recursion with a call stack, where you're doing the left subtree then the right subtree

11:50:51 <ais523> but you can also use a call queue, then you're effectively recursing over the tree breadth-first

11:51:50 <ais523> b_jonas: oh right, it's actually n log n; there are n recursive calls, each of which take constant time, but the context switches take log n time

11:54:21 <ais523> here's the algorithm expanded: we have a set of infixes, initially just the initial list

11:54:44 <ais523> repeatedly, we locate the smallest element in any of the infixes, and split the infix containing it into two infixes: the elements before that element, and the elements after that element

11:55:28 <ais523> if we sort the list in advance (remembering where the elements are in sorted order), "locate the smallest element in any of the infixes" (in terms of its position relative to the original list, and its value) is constant-time

11:56:16 <ais523> and if we maintain the list of infixes in sorted order (sorted using position in the initial list), using a self-balancing search tree, finding the correct infix based on the position of an element relative to the original list is O(log n)

11:56:38 <ais523> thus, the sort in advance is O(n log n), the analysis of all the infixes is also O(n log n), overall algorithm is therefore O(n log n)

11:58:19 <ais523> <the page b_jonas linked a little earlier> One of my favorite quotes from this paper: "Our attack can find collision [in MD4] with hand calculation."

11:58:39 <ais523> apparently, nowadays MD4 is so broken that finding collisions in it is actually faster than MD4'ing the two resulting strings to verify that their hashes are the same as each other

12:04:10 <b_jonas> ah, it's probably one of these scripts in this text file of completely undocumented and unreadable perl one-liners

12:04:29 <b_jonas> or more than one of them, they're probably multiple algorithms for the same thing

12:06:24 <b_jonas> perl -we 'use strict; use List::Util "max"; my @A=split/ /,<>; sub f { my(@a) = (0, @_, 0); my $m = 0; my @s; for my $j (0 .. @a-1) { while (@s and $a[$j] <= $a[$s[-1]]) { my $l = 1<@s?$s[-2]:0; my $t = ($j-$l-1)*($a[$s[-1]]); $m<$t and $m=$t; pop @s; } push @s, $j; } $m; } print f(@A), "\n";'

12:08:46 <b_jonas> the trick is that it's an amortized thingy, the body of the while loop pops an element from @s, so it runs exactly once for each value pushed, and we push a value in each iteration of the for loop, which runs n times

12:09:03 <ais523> it's O(n), the inner loop can't run any more iterations than the outer loop does because the outer loop pushes to @s exactly once and the inner loop pops from it exactly once

12:09:06 <b_jonas> so you have n iterations of the for loop, n iterations of the body of the while loop total
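
The amortized stack algorithm being discussed (the problem is equivalent to the classic largest-rectangle-in-a-histogram problem) can be sketched more readably in Python; this applies the same technique as the Perl one-liner, rather than transliterating it:

```python
def max_len_times_min(a):
    """Max over all nonempty infixes of len(infix) * min(infix), in O(n)."""
    a = list(a) + [0]          # sentinel 0 flushes the stack at the end
    stack = []                 # indices whose values form an increasing run
    best = 0
    for i, v in enumerate(a):
        # each index is pushed once and popped at most once: amortized O(n)
        while stack and a[stack[-1]] >= v:
            j = stack.pop()
            left = stack[-1] + 1 if stack else 0   # infix where a[j] is the minimum
            best = max(best, (i - left) * a[j])
        stack.append(i)
    return best
```

On b_jonas's test vector above this returns 5121, matching the claimed maximum.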

12:11:00 <ais523> I think your algorithm is basically the same as my O(n log n) algorithm except that it goes left to right, and manages to locate all the correct infixes as it does so, rather than needing an index calculated in advance

12:11:47 <b_jonas> that is likely, because neither uses any deep tricks about how the multiplication works apart from that it's monotone

12:12:37 <b_jonas> so you have to consider O(n) infixes, though you could consider O(n) more redundantly

12:13:35 <ais523> hmm… I assume there are randomized hashing algorithms which are secure in the sense that if you don't know the seed, you can't create a collision, and that given a random seed, two strings are no more likely to collide than a birthday paradox suggests

12:16:38 <b_jonas> ais523: people usually use the Blake2 hashes with a secret seed for that, I think python has such an implementation for its dictionaries
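
Keyed BLAKE2, as exposed in Python's hashlib, gives this seeded behaviour directly: without the key, an attacker cannot engineer collisions. (The 32-byte key length and 16-byte digest below are arbitrary illustrative choices; blake2b accepts keys up to 64 bytes.)

```python
import hashlib
import os

KEY = os.urandom(32)  # secret seed; keep it out of the attacker's hands

def seeded_hash(data: bytes) -> str:
    # keyed BLAKE2b: a MAC-style hash parameterized by a secret key
    return hashlib.blake2b(data, key=KEY, digest_size=16).hexdigest()
```

With a fresh random key per process, two fixed strings collide with no more than birthday-paradox probability over the key choice.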

12:17:13 <b_jonas> Blake2 is supposed to be cryptosecure, but it's still somewhat faster than SHA-2 or SHA-3 usually, and more flexible

12:18:05 <ais523> Blake3 is faster than blake2 partly because the number of rounds was reduced, but also because it can be vectorised

12:18:14 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=78096&oldid=78062 * Filexor * (+170) /* Introductions */

12:18:33 <ais523> although, the input needs to be at least 2KiB in order to get the main gain from the vectorisation, and there's some overhead introduced to make that possible

12:19:09 <b_jonas> as in, you need to do a single serial iteration to compute blake2 on a long input

12:19:24 <ais523> Blake3 can be vectorized the same way Blake2 is, but it's more efficient to use the parallelizability and just use the vector registers to SIMD many of the parallel streams of execution at once

12:20:02 <ais523> you could also use multiple CPUs or threads, but I don't consider that to actually speed up an algorithm, the same amount of CPU effort is expended

12:20:38 <ais523> or, rather, I think of the runtime of an algorithm as the integral of its CPU load dtime

12:20:53 <b_jonas> also the blake2 hashes are well-studied, which is the main advantage when you consider crypto hashes

12:20:58 <ais523> maybe I shouldn't, because in practice you rarely find things to fill the idle CPU cores with

12:21:39 <ais523> I might use it for non-crypto purposes, though, if I simply wanted a hash function that's unlikely to have collisions

12:23:29 <ais523> anyway, using a universal hash rather than cryptosecure hash means that there are certain useful properties that might not be ruled out

12:23:47 <ais523> for example, having an easy way to calculate the hash of A concat B if you know the hashes of A and B
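
A polynomial rolling hash is a standard (non-cryptographic) example of such a property: the hash of A concat B is computable from h(A), h(B), and len(B) alone. A sketch in Python (the modulus and base are illustrative choices):

```python
MOD = (1 << 61) - 1   # a Mersenne prime modulus
R = 1_000_003         # base; in practice chosen at random to resist collisions

def poly_hash(s: bytes) -> int:
    # h(s) = s[0]*R^(n-1) + s[1]*R^(n-2) + ... + s[n-1]  (mod MOD)
    h = 0
    for c in s:
        h = (h * R + c) % MOD
    return h

def concat_hash(h_a: int, h_b: int, len_b: int) -> int:
    # h(A ++ B) = h(A) * R^len(B) + h(B)  (mod MOD)
    return (h_a * pow(R, len_b, MOD) + h_b) % MOD
```

This is exactly the property a cryptosecure hash must *not* have, since it lets anyone extend a hashed message without knowing its contents.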

12:31:31 <esowiki> [[PlusOrMinus]] M https://esolangs.org/w/index.php?diff=78097&oldid=75472 * PythonshellDebugwindow * (+550) Update year and add original version

12:41:29 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=78098&oldid=78096 * Blue screen of life * (+172) Adding intro

12:41:38 <esowiki> [[Befunge]] https://esolangs.org/w/index.php?diff=78099&oldid=77368 * Blue screen of life * (+72) Adding a shorter Hello World program without string reversion to the examples section.

12:47:08 <esowiki> [[XENBLN/Commands]] M https://esolangs.org/w/index.php?diff=78100&oldid=74681 * PythonshellDebugwindow * (+3) Update commands

14:11:18 <esowiki> [[User talk:Orby]] https://esolangs.org/w/index.php?diff=78101&oldid=78011 * OsmineYT * (-171)

15:11:17 <esowiki> [[Braindumbed]] N https://esolangs.org/w/index.php?oldid=78102 * Filexor * (+1501) Created page with "'''Braindumbed''' is esoteric instruction set architecture designed by [[User:Filexor]]. ==Language overview== Braindumbed generally runs with 65536 bits of address space with..."

15:36:59 <esowiki> [[Language list]] https://esolangs.org/w/index.php?diff=78103&oldid=78067 * Filexor * (+18)

17:30:17 <arseniiv> b_jonas: nah, the same, though I bought a new display cable and tomorrow hopefully I'll grab it at the store. The current one is DVI-HDMI which is suboptimal, I picked HDMI at both sides. Should have done it so long ago but better late than never. The change wouldn't improve resolution or frequency but I think it would do better in subtle and oblique ways

17:30:50 <arseniiv> oh I need to reactivate my AutoHotkey script to be able to enter fancy apostrophes

17:33:49 <arseniiv> it takes a day or so to migrate all settings that need to, and to install a stray driver and to reinstall most of the programs which aren't portable. I should take notes to automate that for the next upgrade or migration but I'm lazy

20:03:59 <esowiki> [[UFSA]] M https://esolangs.org/w/index.php?diff=78104&oldid=74608 * PythonshellDebugwindow * (+61) /* One-time binary cat */ Add example

20:13:43 <esowiki> [[!@$%^&*()+/Algorithms]] https://esolangs.org/w/index.php?diff=78105&oldid=77980 * SunnyMoon * (-314) This does not work...

20:22:40 <esowiki> [[Taktentus]] M https://esolangs.org/w/index.php?diff=78106&oldid=74146 * PythonshellDebugwindow * (+61) /* Syntax */ Add to documentation

20:37:32 <esowiki> [[Taktentus]] M https://esolangs.org/w/index.php?diff=78107&oldid=78106 * PythonshellDebugwindow * (+57) /* Syntax */ Add note about comments

21:02:49 <esowiki> [[Voxvy]] N https://esolangs.org/w/index.php?oldid=78108 * PythonshellDebugwindow * (+334) Create

22:32:19 <esowiki> [[Talk:Anarchysm]] N https://esolangs.org/w/index.php?oldid=78109 * Heavpoot * (+142) Created page with "would it be considered ok to improve the english of this ~~~~"

22:35:13 <esowiki> [[Anarchysm]] https://esolangs.org/w/index.php?diff=78110&oldid=76549 * Heavpoot * (+182) bees

22:36:51 <esowiki> [[If the question specifies that the number of the words should be less than 3, and the number of words in your answer is larger than 3, your answer is automatically wrong.]] M https://esolangs.org/w/index.php?diff=78111&oldid=74257 * Heavpoot * (+32) i think you forgot an operator.

22:39:32 <esowiki> [[H]] M https://esolangs.org/w/index.php?diff=78112&oldid=71264 * Heavpoot * (+9) Fixed grammar

22:49:03 <esowiki> [[Revaver2pi]] M https://esolangs.org/w/index.php?diff=78113&oldid=73504 * Heavpoot * (+143) Changed wording, fixed grammar

22:51:26 <esowiki> [[TLOWScript]] M https://esolangs.org/w/index.php?diff=78114&oldid=56541 * Heavpoot * (-12) "compiler/interpreter" -> "implementation"

22:52:19 <esowiki> [[TLOWScript]] M https://esolangs.org/w/index.php?diff=78115&oldid=78114 * Heavpoot * (+6) Fixed grammar

22:54:37 <esowiki> [[Cod]] M https://esolangs.org/w/index.php?diff=78116&oldid=43386 * Heavpoot * (+76) Add link, fix typos/grammar