00:00:29 -!- esowiki1 has joined.
00:03:37 -!- esowiki has quit (Ping timeout: 260 seconds).
00:04:23 -!- esowiki1 has changed nick to esowiki.
00:45:13 -!- Lord_of_Life has quit (Ping timeout: 264 seconds).
00:49:11 -!- Lord_of_Life has joined.
01:09:45 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”).
02:49:04 -!- delta23 has quit (Quit: Leaving).
03:25:12 -!- aaaaaa has quit (Quit: leaving).
03:42:56 <zzo38> C doesn't have macros that are expanded only where a variable name is expected (similar to vardef in METAFONT).
04:28:22 <zzo38> I have read that cell phones cannot send/receive faxes due to the codec in use, but is the codec compatible with slow scan format?
04:35:15 -!- adu has joined.
05:07:03 -!- hendursaga has quit (Ping timeout: 240 seconds).
06:20:39 -!- hendursaga has joined.
06:46:25 -!- imode has quit (Ping timeout: 264 seconds).
06:59:46 -!- adu has quit (Quit: adu).
07:16:25 -!- Lord_of_Life has quit (Changing host).
07:16:25 -!- Lord_of_Life has joined.
07:18:04 -!- Sgeo has quit (Read error: Connection reset by peer).
07:50:51 -!- moony has quit (Remote host closed the connection).
07:52:56 -!- moony has joined.
08:01:43 -!- hendursaga has quit (Ping timeout: 240 seconds).
08:07:34 -!- hendursaga has joined.
08:15:59 -!- hendursa1 has joined.
08:18:23 -!- hendursaga has quit (Ping timeout: 240 seconds).
08:22:43 -!- hendursa1 has quit (Ping timeout: 240 seconds).
08:26:05 -!- t20kdc has joined.
08:38:46 -!- cpressey has joined.
08:40:38 -!- hendursa1 has joined.
08:40:52 -!- hendursa1 has quit (Remote host closed the connection).
08:41:38 -!- hendursa1 has joined.
08:47:09 <cpressey> Defining a function is like adding an axiom, in the sense that it asserts that the hitherto-meaningless name of the function now has such-and-such meaning.
08:52:46 <Taneb> cpressey: I think it's more analogous to stating a theorem
08:52:58 <Taneb> With adding a builtin function like adding an axiom
08:56:48 <cpressey> Most of the functions I write surely aren't theorems
09:00:22 -!- Arcorann_ has joined.
09:00:25 -!- Arcorann has quit (Ping timeout: 240 seconds).
09:01:13 <Taneb> They might just not be very good theorems
09:01:57 <int-e> There's a definitional principle by specification: If you prove (exists x. P(x)), then that allows you to introduce a fresh constant k, and assert P(k) as an axiom.
09:02:41 <int-e> The proof of exists x. P(x) is fairly trivial if P(x) has the form x = foo with x not occurring in foo.
09:09:57 <int-e> Hmm. I believe Isabelle/Pure really only has the special case of definition by equality. Isabelle/HOL adds a Hilbert choice operator, so you get the specification principle by defining k = SOME x. P(x), and noting that exists x. P(x) ==> P(SOME x. P(x)) <==> P(k).
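A minimal Lean 4 sketch of the specification principle int-e describes, with Classical.choose standing in for Isabelle/HOL's Hilbert choice operator SOME; the names k and k_spec are illustrative, not taken from any existing library.

    -- From a proof of (exists x, P x) we may introduce a constant k and
    -- assert P k. Classical.choose plays the role of SOME x. P x.
    noncomputable def k {α : Type} (P : α → Prop) (h : ∃ x, P x) : α :=
      Classical.choose h

    theorem k_spec {α : Type} (P : α → Prop) (h : ∃ x, P x) : P (k P h) :=
      Classical.choose_spec h
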
09:10:17 <rain1> \x. x has type P -> P which is the tautology theorem
09:10:22 <int-e> No clue about other interactive theorem provers.
09:10:55 <rain1> \f. \g. \x. g(f(x)) has type (P -> Q) -> (Q -> R) -> (P -> R) which is a tautology
09:11:10 <int-e> Especially the dependently typed ones.
09:11:15 <rain1> if you read -> as implies it works
09:12:05 <int-e> rain1: not sure what your point is
09:12:29 <int-e> @djinn (p -> q) -> (q -> r) -> p -> r
09:12:32 <rain1> it's for cpressey to see why functions are proofs of theorems
09:12:58 <rain1> just a couple simple examples
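rain1's two examples, written out as Haskell definitions: under the Curry-Howard reading, the polymorphic types are propositional tautologies and the terms are their proofs. A sketch; the names identity and compose are ours.

    -- A proof of P -> P: the identity function.
    identity :: p -> p
    identity x = x

    -- A proof of (P -> Q) -> (Q -> R) -> (P -> R): forward composition.
    -- (This is the type int-e passed to @djinn above.)
    compose :: (p -> q) -> (q -> r) -> (p -> r)
    compose f g x = g (f x)
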
09:13:10 <int-e> But that is entirely independent of how you deal with definitions.
09:13:44 <cpressey> There are lots of functions which are not proofs of theorems.
09:13:45 <int-e> And you want proper definitions in every practical proof framework because they are crucial for compression.
09:14:11 <int-e> (Which may be the same if you have dependent types.)
09:14:50 <rain1> cpressey, that is true...
09:14:52 <int-e> In *theory*, of course you can treat definitions and lemmas as abbreviations to be unfolded.
09:14:57 <rain1> like, infinite loops
09:15:25 <rain1> but if you stick to functions with no infinite loops they can be read as theorems (even if they are often boring ones)
09:15:49 <Taneb> I think all functions are proofs of theorems, just not necessarily in a consistent logic
09:16:05 <cpressey> "can be read as theorems that no one cares about", I won't argue with that
09:16:08 <int-e> Most of the functions I write are boring when viewed as theorems.
09:17:02 <int-e> When I write a function of type Nat -> Nat, I usually care about the values. But usually not enough to try to specify what they are in the type.
09:17:34 <int-e> So... functions are more than just proofs.
09:18:18 <cpressey> If I use a dynamically typed language, then I am using a "degenerate type system with only one type" in the view of type theory, and I imagine that makes it difficult to form any interesting proofs. But I can *program* in it just fine.
09:18:41 <Taneb> I still assert that functions are far more like theorems than axioms
09:18:50 <int-e> And yes, indeed, the potential for nontermination makes the picture of functions as proofs even more inaccurate.
09:19:09 <int-e> Taneb: It's not about functions, at least not as I read it.
09:19:52 <int-e> Taneb: It's about giving a *name* to a value, which is the essence of a definition of a function.
09:21:33 <int-e> Of course this is my own interpretation of something cpressey wrote. I don't know whether it's the intended one.
09:22:51 <cpressey> int-e: I feel like I *kind of* follow your explanation with P(x), if, to prove that x (for example factorial) exists you only need to supply the definition of it (a term exists that we can call "factorial")
09:23:26 <cpressey> I suppose that's a kind of existence proof.
09:24:33 <cpressey> But then, what is P() in general? It seems like it could be the evaluation relation for your environment?
09:25:21 <cpressey> OK, then I probably follow you.
09:25:31 <int-e> it could be... P(f) <--> f(0) = 1 /\ forall x. f(x+1) = f(x)*(x+1)
09:28:15 <int-e> (Which specifies the factorial function, assuming f : nat -> nat, but makes existence of f non-trivial.)
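One concrete witness for int-e's specification, sketched in Haskell on naturals; the name fact is ours, and the well-foundedness of the x+1 -> x dependency (i.e. termination) is exactly what makes this definition legitimate.

    import Numeric.Natural (Natural)

    -- A witness f for the specification
    --   f 0 = 1  and  f (x+1) = f x * (x+1)
    fact :: Natural -> Natural
    fact 0 = 1
    fact n = fact (n - 1) * n
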
09:28:28 <rain1> yes functions are more than just their types
09:28:50 <rain1> types are just a blueprint of what the function does structurally
09:30:31 <int-e> cpressey: What I'm saying is very much inspired by how Isabelle/HOL deals with these things.
09:32:23 <int-e> For example, there is a function definition package that *can* deal with the above specification of the factorial function. Internally it'll extract a dependency relation (here, x+1 -> x; in general those will be tuples of all the arguments) and show that it is well-founded (for which it has various heuristics)... and then one can show that f exists by induction on that relation.
09:32:49 <int-e> (This is not entirely accurate, but close enough for an overview.)
09:34:09 <int-e> hmm. let's s/overview/outline/
09:35:07 <rain1> although some functions are uniquely determined by their type
09:35:25 <rain1> up to extraneous lambda terms and other "nothings"
09:36:45 <cpressey> rain1: Yeah, and even when it's not uniquely determined, you get at least one theorem "for free" by parametricity, as I understand it.
09:37:29 <int-e> @djinn (a -> a) -> a -> a
09:37:49 <rain1> that's true! parametricity is pretty amazing
09:37:59 <int-e> (unsurprisingly, the preferred Church numberal is 1.)
09:38:06 <rain1> if a function works for all types you know it doesn't inspect or do case analysis on data of that type
09:38:14 <rain1> which lets you say a lot about a function
09:38:24 <int-e> "numberal". Finally a typo that makes sense.
09:38:40 <int-e> @free (a -> a) -> a -> a
09:38:40 <lambdabot> Try `free <ident>` or `free <ident> :: <type>`
09:38:50 <int-e> @free church :: (a -> a) -> a -> a
09:38:50 <lambdabot> f . g = h . f => f . church g = church h . f
09:39:23 <int-e> induction, more or less.
09:41:22 <rain1> wow i never knew that!
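A small Haskell illustration of lambdabot's free theorem for the type (a -> a) -> a -> a, instantiated at the Church numeral 2; the names church, f, g, h are illustrative, and the premise f . g = h . f is arranged by construction.

    -- Any inhabitant of (a -> a) -> a -> a; here the Church numeral 2.
    church :: (a -> a) -> a -> a
    church s z = s (s z)

    -- Choose f, g, h so that f . g = h . f  (here both sides are \x -> 2*x + 2).
    f, g, h :: Int -> Int
    f = (* 2)
    g = (+ 1)
    h = (+ 2)

    -- The free theorem then gives f . church g = church h . f;
    -- both sides map 5 to 14, for instance.
    check :: Bool
    check = map (f . church g) [0 .. 10] == map (church h . f) [0 .. 10]
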
09:46:24 <cpressey> On the other hand, it's possible for a function to uphold an invariant that cannot be captured by any type.
10:17:46 -!- HaJones has joined.
10:21:24 <b_jonas> cpressey: definitions aren't like axioms in that they don't let you prove things that were meaningful but you couldn't have proved from your existing axioms.
10:23:50 <int-e> spoken like a logician
10:29:43 <cpressey> My comparison to axioms was meant to highlight that before you give the definition of factorial there's no way in hell you can deduce the truth or falsity of the statement "factorial(5)=120"
11:22:09 -!- ais523 has joined.
11:22:24 <ais523> b_jonas: you can imagine a modification of, say, the SHA-1 or SHA-256 algorithm
11:22:52 <ais523> in which the final step is "check if this hash has ever been output for a different input file; if so, increment the hash and repeat this step"
11:23:06 <ais523> now you have an algorithm that is precisely defined and collision-free
11:23:11 <ais523> (if a little difficult to use in practice)
11:23:16 <ais523> this seems quite similar to what your magic box is doing
11:23:26 <ais523> (although not the same, obviously)
11:23:38 <b_jonas> ais523: I'm not sure if it's well-defined whether a hash has been output, and which of two hashes has been output first.
11:24:14 <b_jonas> ais523: and no, (a) the magic box does more than just produce hashes, and (b) it uses a longer digest and one that's cryptographically safe so you can't produce collisions
11:24:45 <ais523> the thing that the magic box produces is a hash by definition, isn't it? except that it might not be deterministic
11:24:55 <ais523> (when using it in "store" mode)
11:25:15 <b_jonas> ais523: it need not be. it could be just an entirely random value that it doesn't compute from the input or anything
11:25:16 <ais523> the magic boxes are slightly more useful if you also add the proviso that the same code will be returned for anyone, if they provide the same file as input
11:25:35 <ais523> I would argue that *even if* it isn't computed from the input, it's still a hash function
11:25:40 <b_jonas> ais523: sort of but not really. if you want to check for collisions, you can just use an ordinary hash independent of the magic box.
11:25:54 <b_jonas> ok, then it's a hash function.
11:26:15 <ais523> fwiw, I have wanted a similar operation to this in a golfing language
11:26:26 <ais523> it takes an arbitrary data structure and returns a single integer, consecutive from 0
11:26:39 <b_jonas> interning arbitrary data structures?
11:26:54 <ais523> of course, there is an uninterning operation, too
11:27:18 <ais523> another idea I had, along similar lines, was a function that interns pairs of integers
11:27:30 <ais523> when you have this, your language no longer needs data structures, as you can do cons just on integers
11:27:55 <ais523> (there are bijections between Z² and Z, so I guess you wouldn't even need to use interning if you didn't mind the size of the numbers exploding)
11:28:14 <b_jonas> ais523: that's what cons does if there's no garbage collection, except I think if you say interning, you want a deterministic function
11:28:29 <b_jonas> as for numbers exploding, yes, that's what (0) does
11:28:48 <ais523> well, deterministic within one run of the program
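For the non-interning variant ais523 mentions (a bijection between pairs and single numbers, at the price of the numbers exploding), here is the standard Cantor pairing on naturals as a Haskell sketch; the names pair and unpair are ours.

    import Numeric.Natural (Natural)

    -- Cantor pairing: a bijection between pairs of naturals and naturals,
    -- so "cons" can be done on plain numbers (their size grows, as noted above).
    pair :: Natural -> Natural -> Natural
    pair x y = (x + y) * (x + y + 1) `div` 2 + y

    unpair :: Natural -> (Natural, Natural)
    unpair z = (w - y, y)
      where
        -- w is the diagonal z lies on: the largest n with n*(n+1)/2 <= z.
        w = last (takeWhile (\n -> n * (n + 1) `div` 2 <= z) [0 ..])
        y = z - w * (w + 1) `div` 2
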
11:29:35 <ais523> anyway, I think "content-addressable storage" is the general term for your magic boxes, except that two different CASes are not usually linked to each other
11:29:46 <b_jonas> that sounds typical for these golf languages, normally you wouldn't want to intern every pair, because it's sort of slow, but in a golf language you might not care
11:30:16 <ais523> yes, this seems like an operation that is probably usable by golfing languages only
11:30:46 <b_jonas> ais523: yes, and https://valerieaurora.org/monkey.html is the clear explanation of them
11:31:54 <b_jonas> the magic boxes are not only linked, but can also store practically any amount of data (limited by the boxes' bandwidth) forever for free, unlike in the real world
11:32:45 <b_jonas> ais523: rsync => not in the traditional system, but it makes clever use of hashes to find likely matching infixes among files, just as some delta compression algorithms do
11:33:03 <b_jonas> and in fact even some ordinary compression algorithms do that
11:33:47 <ais523> Riot Games wrote an article about how they do incremental updates, without needing to know what's already installed on the target computer
11:33:55 <b_jonas> because deterministically finding all common infixes is too expensive, but finding enough of them to compress well in typical practical cases is good enough
11:34:29 <ais523> it works by breaking files into sections based on their content, and sending the hash of each section that should be in the new files
11:34:33 <b_jonas> we'll probably learn all about this in Knuth vol 5 by the way
11:34:48 <ais523> then the existing files are used to find as many of those sections as possible, and the ones which aren't found are downloaded
11:35:00 <b_jonas> ais523: yeah, that's sort of what rsync does
11:35:12 <ais523> right, that's why I mentioned it
11:35:33 <ais523> incidentally, A Pear Tree's current algorithm for finding sections of files with a CRC of 0 is quadratic
11:35:41 <ais523> but I think it's possible to do it in linear time
11:36:10 <ais523> so maybe there isn't even a need to define the sections in terms of content, you could just do it in terms of, say, a CRC-128
11:36:39 <ais523> (you would expect no random collisions with one of those; even though it isn't cryptographically secure, it seems likely that any actual collisions would have been planted by a human)
11:37:00 <ais523> err, it's probably actually n log n rather than linear
11:37:13 <b_jonas> ais523: ah yeah, that reminds me of one of my favorite algorithmic problems. you get a vector of nonnegative machine-sized integers, find the infix in that vector that maximizes the product of the length and the minimum of that infix.
11:37:41 <b_jonas> if n is the length of the vector, there's a trivial O(n**3) algorithm, an easy O(n**2) algorithm, and a tricky O(n) algorithm
11:38:28 -!- cpressey has quit (Quit: WeeChat 1.9.1).
11:38:30 <ais523> "infix" = contiguous substring, right?
11:40:02 <ais523> there's an easy O(nk) algorithm too, which might or might not be faster depending on the size of the numbers
11:40:34 <ais523> (k being the largest integer in the vector)
11:41:15 <ais523> (faster than the O(n²) algorithm, that is)
11:42:53 <ais523> I think the easy O(n²) algorithm is "for each element in the vector, scan forwards until you find a smaller element, then take the maximum among the infixes calculated this way"
11:43:13 <b_jonas> in case you want to experiment, a test vector is [138, 222, 263, 156, 176, 58, 116, 211, 293, 481, 352, 217, 454, 639, 423, 580, 819, 716, 845, 999, 936, 714, 621, 569, 361, 175, 52, 57, 33] and the largest product is 5121, I have two longer test vectors that I can paste
11:43:28 <b_jonas> yes, that is the easy O(n**2) algorithm
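For reference, one standard form of the easy quadratic algorithm as a Haskell sketch: for every starting position, extend the infix to the right while maintaining the running minimum, and keep the best length*minimum seen. On the test vector b_jonas pasted above it gives 5121. Names are ours.

    import Data.List (tails)

    -- Easy O(n^2): every suffix gives all infixes starting at that position;
    -- scanl1 min tracks the running minimum as the infix grows to the right.
    bestInfixQuadratic :: [Integer] -> Integer
    bestInfixQuadratic xs =
      maximum (0 : [ best suffix | suffix <- init (tails xs) ])
      where
        best ys = maximum (zipWith (*) [1 ..] (scanl1 min ys))
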
11:44:07 <ais523> the easy O(nk) algorithm is "for each k from 1 to the highest element in the vector, look for maximum-length infixes whose minimum is k, then take the maximum among the infixes calculated this way"
11:45:54 -!- sprocklem has quit (Ping timeout: 258 seconds).
11:46:04 <ais523> easy O(n log n): sort the indexes of the vector by their values, then consider the infixes between the smallest and second-smallest, second-smallest and third-smallest, etc., values
11:46:09 -!- sprocklem has joined.
11:46:20 <ais523> I'm not 100% sure that's correct though
11:46:48 <b_jonas> how do you compute the minimum of those infixes?
11:46:48 <ais523> ah no, you have to extend each of those infixes outwards until you reach a value that's smaller
11:47:04 <ais523> ah right, yes, that hurts the complexity too
11:48:19 <ais523> more difficult O(n log n): consider the entire list as an infix, and recursively apply this algorithm to the prefix before the smallest element and the suffix after the smallest element
11:48:32 <ais523> this is O(n log n) if you speed it up by locating all the element locations in order from smallest to largest first
11:48:39 <ais523> and do the recursion in that order
11:48:47 <ais523> because, then you'll always know what the smallest element in the infix you're considering is
11:49:32 <ais523> the recursion itself becomes O(n) once you have the O(n log n) index of elements in order
11:50:08 <b_jonas> ais523: how does that work? how do you do a recursion "in that order"? a recursion usually has to descend to all infixes on one side, then all infixes on the other side.
11:50:41 <ais523> so you can do a recursion with a call stack, where you're doing the left subtree then the right subtree
11:50:51 <ais523> but you can also use a call queue, then you're effectively recursing over the tree breadth-first
11:51:04 <ais523> you can generalise this into doing a recursion in arbitrary order
11:51:12 <b_jonas> and how will that work in O(n) time?
11:51:16 -!- sprocklem has quit (Ping timeout: 256 seconds).
11:51:44 <b_jonas> that sounds like it might work but I'm not convinced
11:51:50 <ais523> b_jonas: oh right, it's actually n log n; there are n recursive calls, each of which takes constant time, but the context switches take log n time
11:51:53 -!- sprocklem has joined.
11:52:13 <ais523> because you need to find the right stack/queue frame to aim for
11:54:21 <ais523> here's the algorithm expanded: we have a set of infixes, initially just the initial list
11:54:27 <b_jonas> meanwhile I'm trying to find my old implementation of the O(n) algorithm...
11:54:44 <ais523> repeatedly, we locate the smallest element in any of the infixes, and split it into two infixes, the elements before that element, and the elements after that element
11:54:48 <b_jonas> but I'm not sure I still have it
11:54:58 <ais523> and repeat until we have no elements left
11:55:28 <ais523> if we sort the list in advance (remembering where the elements are in sorted order), "locate the smallest element in any of the infixes" (in terms of its position relative to the original list, and its value) is constant-time
11:56:16 <ais523> and if we maintain the list of infixes in sorted order (sorted using position in the initial list), using a self-balancing search tree, finding the correct infix based on the position of an element relative to the original list is O(log n)
11:56:38 <ais523> thus, the sort in advance is O(n log n), the analysis of all the infixes is also O(n log n), overall algorithm is therefore O(n log n)
11:56:54 <ais523> constant factor seems bad, though :-D
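A direct Haskell rendering of the recursion ais523 just expanded (split at the smallest element and recurse on both sides). This naive version spends linear time locating each minimum, so it is only O(n log n) on typical inputs; the precomputed sorted index and the search tree over infixes described above are what bring the worst case down to O(n log n). Names are ours.

    import Data.List (elemIndex)
    import Data.Maybe (fromJust)

    -- The best infix either contains the smallest element (then extending to
    -- the whole infix can only help), or lies entirely before or after it.
    bestInfixSplit :: [Integer] -> Integer
    bestInfixSplit [] = 0
    bestInfixSplit xs =
      maximum
        [ fromIntegral (length xs) * m
        , bestInfixSplit before
        , bestInfixSplit after
        ]
      where
        m = minimum xs
        (before, _ : after) = splitAt (fromJust (elemIndex m xs)) xs
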
11:58:19 <ais523> <the page b_jonas linked a little earlier> One of my favorite quotes from this paper: "Our attack can find collision [in MD4] with hand calculation."
11:58:39 <ais523> apparently, nowadays MD4 is so broken that finding collisions in it is actually faster than MD4'ing the two resulting strings to verify that their hashes are the same as each other
12:04:10 <b_jonas> ah, it's probably one of these scripts in this text file of completely undocumented and unreadable perl one-liners
12:04:29 <b_jonas> or more than one of them, they're probably multiple algorithms for the same thing
12:05:26 <b_jonas> but is this actually the O(n) solution?
12:06:22 <b_jonas> it says this where the input is the numbers as space-separated decimal
12:06:24 <b_jonas> perl -we 'use strict; use List::Util "max"; my @A=split/ /,<>; sub f { my(@a) = (0, @_, 0); my $m = 0; my @s; for my $j (0 .. @a-1) { while (@s and $a[$j] <= $a[$s[-1]]) { my $l = 1<@s?$s[-2]:0; my $t = ($j-$l-1)*($a[$s[-1]]); $m<$t and $m=$t; pop @s; } push @s, $j; } $m; } print f(@A), "\n";'
12:07:45 <ais523> that looks like an unused import to me
12:07:51 <ais523> unless there's some usage of "max" I'm missing
12:08:02 <b_jonas> yeah, like I said, undocumented and messy, wrote it for myself
12:08:09 <b_jonas> but I think it's the O(n) solution
12:08:46 <b_jonas> the trick is that it's an amortized thingy, the body of the while loop pops an element from @s, so it runs exactly once for each value pushed, and we push a value in each iteration of the for loop, which runs n times
12:08:56 <esowiki> [[Special:Log/newusers]] create * Filexor * New user account
12:09:03 <ais523> it's O(n), the inner loop can't run more iterations in total than the outer loop does, because each iteration of the outer loop pushes to @s exactly once and each iteration of the inner loop pops from it exactly once
12:09:06 <b_jonas> so you have n iterations of the for loop, n iterations of the body of the while loop total
12:09:43 <b_jonas> and I have more test vectors here
12:11:00 <ais523> I think your algorithm is basically the same as my O(n log n) algorithm except that it goes left to right, and manages to locate all the correct infixes as it does so, rather than needing an index calculated in advance
12:11:01 <b_jonas> (and outputs for them, but you could get each with the quadratic time algorithm)
12:11:13 <ais523> but both algorithms look at the same n infixes
12:11:47 <b_jonas> that is likely, because neither uses any deep tricks about how the multiplication works apart from that it's monotone
12:12:05 -!- HaJones has quit (Quit: Leaving).
12:12:37 <b_jonas> so you have to consider O(n) infixes, though you could consider O(n) more, redundantly
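A commented Haskell rendering of the O(n) algorithm that the Perl one-liner above implements, with b_jonas's amortized argument in mind: every index is pushed once and popped once. Names are ours; on the test vector above it also returns 5121.

    -- O(n) monotonic-stack version: the stack holds (index, value) pairs with
    -- strictly increasing values. When a value is popped we know the widest
    -- infix in which it is the minimum, and score it. A trailing sentinel 0
    -- flushes the stack at the end (all inputs are nonnegative).
    bestInfixLinear :: [Integer] -> Integer
    bestInfixLinear xs = go 0 [] (zip [(0 :: Integer) ..] (xs ++ [0]))
      where
        go best _ [] = best
        go best stack ((j, v) : rest) =
          let (best', stack') = pops best stack
          in  go best' ((j, v) : stack') rest
          where
            pops b ((_, w) : more)
              | v <= w =
                  -- w is the minimum of the infix reaching from just after the
                  -- entry below it (or the start) up to just before index j.
                  let left = case more of { (k, _) : _ -> k; [] -> -1 }
                  in  pops (max b ((j - left - 1) * w)) more
            pops b s = (b, s)
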
12:13:35 <ais523> hmm… I assume there are randomized hashing algorithms which are secure in the sense that if you don't know the seed, you can't create a collision, and that given a random seed, two strings are no more likely to collide than the birthday paradox suggests
12:13:44 <ais523> how much faster are they than cryptosecure hashes?
12:14:39 <ais523> Wikipedia calls this "universal hashing"
12:16:08 -!- Frater_EST has joined.
12:16:38 <b_jonas> ais523: people usually use the Blake2 hashes with a secret seed for that, I think python has such an implementation for its dictionaries
12:16:53 -!- Frater_EST has left.
12:17:13 <b_jonas> Blake2 is supposed to be cryptosecure, but it's still somewhat faster than SHA-2 or SHA-3 usually, and more flexible
12:18:05 <ais523> Blake3 is faster than Blake2 partly because the number of rounds was reduced, but also because it can be vectorised
12:18:14 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=78096&oldid=78062 * Filexor * (+170) /* Introductions */
12:18:33 <ais523> although, the input needs to be at least 2KiB in order to get the main gain from the vectorisation, and there's some overhead introduced to make that possible
12:18:37 <b_jonas> no, I think blake2 can be vectorized too. blake3 can be parallelized.
12:18:49 <ais523> it's a different sort of vectorization
12:19:09 <b_jonas> as in, you need to do a single serial iteration to compute blake2 on a long input
12:19:24 <ais523> Blake3 can be vectorized the same way Blake2 is, but it's more efficient to use the parallelizability and just use the vector registers to SIMD many of the parallel streams of execution at once
12:19:33 <b_jonas> but that is an iteration with a rather fast constant factor as crypto hashes go
12:20:02 <ais523> you could also use multiple CPUs or threads, but I don't consider that to actually speed up an algorithm, the same amount of CPU effort is expended
12:20:05 <b_jonas> but I don't think this comes up often as an advantage
12:20:18 <b_jonas> it's something that rarely helps in practice
12:20:38 <ais523> or, rather, I think of the runtime of an algorithm as the integral of its CPU load dtime
12:20:53 <b_jonas> also the blake2 hashes are well-studied, which is the main advantage when you consider crypto hashes
12:20:58 <ais523> maybe I shouldn't, because in practice you rarely find things to fill the idle CPU cores with
12:21:20 <ais523> I wouldn't use Blake3 for non-crypto work yet
12:21:23 <ais523> err, for crypto work yet
12:21:39 <ais523> I might use it for non-crypto purposes, though, if I simply wanted a hash function that's unlikely to have collisions
12:23:29 <ais523> anyway, using a universal hash rather than cryptosecure hash means that there are certain useful properties that might not be ruled out
12:23:47 <ais523> for example, having an easy way to calculate the hash of A concat B if you know the hashes of A and B
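A Haskell sketch of the kind of composability ais523 mentions, using a plain polynomial hash: not cryptographically secure, and only a universal family if the base is drawn randomly from a seed; the names and constants here are illustrative. The point is that the digest of A ++ B is computable from the digests of A and B plus the length of B.

    import Data.Char (ord)

    -- A string is read as a number in base b modulo the prime p.
    p, b :: Integer
    p = 2 ^ 61 - 1   -- a Mersenne prime; an illustrative (not prescribed) choice
    b = 131          -- fixed here; a universal family would draw it from a seed

    polyHash :: String -> Integer
    polyHash = foldl (\acc c -> (acc * b + fromIntegral (ord c)) `mod` p) 0

    -- Combine digests without rereading either piece:
    --   combine (polyHash xs) (polyHash ys) (length ys) == polyHash (xs ++ ys)
    combine :: Integer -> Integer -> Int -> Integer
    combine hashA hashB lenB = (hashA * b ^ lenB + hashB) `mod` p
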
12:31:31 <esowiki> [[PlusOrMinus]] M https://esolangs.org/w/index.php?diff=78097&oldid=75472 * PythonshellDebugwindow * (+550) Update year and add original version
12:33:24 <esowiki> [[Special:Log/newusers]] create * Blue screen of life * New user account
12:41:29 <esowiki> [[Esolang:Introduce yourself]] https://esolangs.org/w/index.php?diff=78098&oldid=78096 * Blue screen of life * (+172) Adding intro
12:41:38 <esowiki> [[Befunge]] https://esolangs.org/w/index.php?diff=78099&oldid=77368 * Blue screen of life * (+72) Adding a shorter Hello World program without string reversion to the examples section.
12:47:08 <esowiki> [[XENBLN/Commands]] M https://esolangs.org/w/index.php?diff=78100&oldid=74681 * PythonshellDebugwindow * (+3) Update commands
13:13:25 -!- hendursa1 has quit (Quit: hendursa1).
13:13:44 -!- hendursaga has joined.
13:14:03 -!- File_xor has joined.
13:15:13 -!- File_xor has quit (Remote host closed the connection).
13:27:00 -!- cpressey has joined.
14:11:18 <esowiki> [[User talk:Orby]] https://esolangs.org/w/index.php?diff=78101&oldid=78011 * OsmineYT * (-171)
14:19:40 -!- Arcorann_ has quit (Read error: Connection reset by peer).
14:24:16 -!- cpressey has quit (Ping timeout: 256 seconds).
14:25:56 -!- cpressey has joined.
14:32:45 -!- Sgeo has joined.
14:38:34 -!- Colonel_Kernel has joined.
14:38:40 -!- Colonel_Kernel has left.
15:11:17 <esowiki> [[Braindumbed]] N https://esolangs.org/w/index.php?oldid=78102 * Filexor * (+1501) Created page with "'''Braindumbed''' is esoteric instruction set architecture designed by [[User:Filexor]]. ==Language overview== Braindumbed generally runs with 65536 bits of address space with..."
15:36:59 <esowiki> [[Language list]] https://esolangs.org/w/index.php?diff=78103&oldid=78067 * Filexor * (+18)
15:41:02 -!- Filexor has joined.
15:42:40 -!- Filexor has quit (Remote host closed the connection).
15:48:04 -!- arseniiv has joined.
16:00:52 -!- cpressey has quit (Quit: WeeChat 1.9.1).
16:21:39 -!- FreeFull has joined.
16:33:40 -!- Melvar has quit (Quit: WeeChat 2.8).
16:41:04 <b_jonas> arseniiv: is it a new computer hardware too?
16:54:53 -!- ATMunn has quit (Ping timeout: 246 seconds).
16:55:58 -!- ATMunn has joined.
17:05:53 -!- LKoen has joined.
17:08:40 -!- arseniiv has quit (Ping timeout: 258 seconds).
17:24:32 -!- arseniiv has joined.
17:30:17 <arseniiv> b_jonas: nah, the same, though I bought a new display cable and tomorrow hopefully I'll grab it at the store. The current one is DVI-to-HDMI, which is suboptimal; I picked HDMI on both sides. I should have done it long ago, but better late than never. The change won't improve resolution or frequency, but I think it will do better in subtle and oblique ways
17:30:50 <arseniiv> oh I need to reactivate my AutoHotkey script to be able to enter fancy apostrophes
17:33:49 <arseniiv> it takes a day or so to migrate all settings that need to, and to install a stray driver and to reinstall most of the programs which aren't portable. I should take notes to automate that for the next upgrade or migration but I'm lazy
17:34:54 -!- FreeFull has quit.
17:36:49 -!- FreeFull has joined.
17:43:15 <b_jonas> I'll probably grab the stuff I bought on Monday or Tuesday
17:58:15 -!- kurisu has joined.
18:06:19 -!- Melvar has joined.
18:25:31 -!- arseniiv has quit (Quit: gone too far).
18:59:08 -!- ais523 has quit (Quit: quit).
19:03:27 -!- hendursaga has quit (Remote host closed the connection).
19:04:19 -!- hendursaga has joined.
19:19:26 -!- imode has joined.
19:53:44 -!- kurisu has quit (Remote host closed the connection).
20:03:59 <esowiki> [[UFSA]] M https://esolangs.org/w/index.php?diff=78104&oldid=74608 * PythonshellDebugwindow * (+61) /* One-time binary cat */ Add example
20:13:43 <esowiki> [[!@$%^&*()+/Algorithms]] https://esolangs.org/w/index.php?diff=78105&oldid=77980 * SunnyMoon * (-314) This does not work...
20:22:40 <esowiki> [[Taktentus]] M https://esolangs.org/w/index.php?diff=78106&oldid=74146 * PythonshellDebugwindow * (+61) /* Syntax */ Add to documentation
20:37:32 <esowiki> [[Taktentus]] M https://esolangs.org/w/index.php?diff=78107&oldid=78106 * PythonshellDebugwindow * (+57) /* Syntax */ Add note about comments
20:44:27 -!- cherrybean has joined.
20:51:18 -!- cherrybean has left ("Leaving").
21:02:49 <esowiki> [[Voxvy]] N https://esolangs.org/w/index.php?oldid=78108 * PythonshellDebugwindow * (+334) Create
21:08:23 -!- Lord_of_Life has quit (Read error: Connection reset by peer).
21:09:15 -!- Lord_of_Life has joined.
21:09:16 -!- Lord_of_Life has quit (Changing host).
21:09:16 -!- Lord_of_Life has joined.
21:46:26 -!- LKoen has quit (Remote host closed the connection).
21:52:57 -!- tromp has quit (Remote host closed the connection).
22:23:36 -!- tromp has joined.
22:28:43 -!- tromp has quit (Ping timeout: 260 seconds).
22:32:19 <esowiki> [[Talk:Anarchysm]] N https://esolangs.org/w/index.php?oldid=78109 * Heavpoot * (+142) Created page with "would it be considered ok to improve the english of this ~~~~"
22:33:43 -!- t20kdc has quit (Remote host closed the connection).
22:35:13 <esowiki> [[Anarchysm]] https://esolangs.org/w/index.php?diff=78110&oldid=76549 * Heavpoot * (+182) bees
22:36:51 <esowiki> [[If the question specifies that the number of the words should be less than 3, and the number of words in your answer is larger than 3, your answer is automatically wrong.]] M https://esolangs.org/w/index.php?diff=78111&oldid=74257 * Heavpoot * (+32) i think you forgot an operator.
22:39:32 <esowiki> [[H]] M https://esolangs.org/w/index.php?diff=78112&oldid=71264 * Heavpoot * (+9) Fixed grammar
22:44:48 -!- Arcorann_ has joined.
22:49:03 <esowiki> [[Revaver2pi]] M https://esolangs.org/w/index.php?diff=78113&oldid=73504 * Heavpoot * (+143) Changed wording, fixed grammar
22:51:26 <esowiki> [[TLOWScript]] M https://esolangs.org/w/index.php?diff=78114&oldid=56541 * Heavpoot * (-12) "compiler/interpreter" -> "implementation"
22:52:19 <esowiki> [[TLOWScript]] M https://esolangs.org/w/index.php?diff=78115&oldid=78114 * Heavpoot * (+6) Fixed grammar
22:54:37 <esowiki> [[Cod]] M https://esolangs.org/w/index.php?diff=78116&oldid=43386 * Heavpoot * (+76) Add link, fix typos/grammar
22:57:44 -!- deltaepsilon23 has joined.
22:58:40 -!- tromp has joined.
23:00:47 -!- deltaepsilon23 has changed nick to delta23.
23:03:24 -!- tromp has quit (Ping timeout: 256 seconds).
23:09:00 -!- aaaaaa has joined.
23:22:16 -!- FreeFull has quit.
23:52:55 -!- tromp has joined.