01:02:14 <esolangs> [[Adj]] N https://esolangs.org/w/index.php?oldid=123663 * BestCoder * (+374) Created page with "Adj (a is ADd a to b, then Jump to c) aka a=a+b; goto c command = ADJ a b c label = x: add only = ADJ a b X jump only = ADJ X X c output = ADJ 0 b c or ADJ 0 b X input = ADJ 1 b c or ADJ 1 b X = Examples = == Add 1 and 1 == ADJ a 1 X ADJ b 1 X ADJ a b X ADJ 0 a X == Ad
01:27:26 -!- amby has quit (Quit: so long suckers! i rev up my motorcylce and create a huge cloud of smoke. when the cloud dissipates im lying completely dead on the pavement).
01:49:55 -!- Lord_of_Life has quit (Ping timeout: 268 seconds).
01:50:45 -!- Lord_of_Life has joined.
02:30:35 -!- Noisytoot has quit (Ping timeout: 264 seconds).
02:44:58 -!- Noisytoot has joined.
03:32:25 -!- ^[ has quit (Quit: ^[).
04:30:42 -!- Noisytoot has quit (Remote host closed the connection).
04:32:49 -!- APic has quit (Ping timeout: 264 seconds).
04:47:06 -!- Noisytoot has joined.
05:04:33 -!- SGautam has joined.
05:15:02 -!- APic has joined.
05:55:36 -!- ais523 has joined.
06:05:12 <Swyrl> What's the fastest Thue interpreter in existence?
06:29:12 -!- ^[ has joined.
06:32:57 <ais523> Swyrl: most of the work on Thue interpreters was over a decade ago, so I'm not sure anyone remembers that any more
06:33:12 <ais523> and I'm not sure if any were optimised for speed
06:33:15 <Swyrl> Thue interpreter or compiler, I guess.
06:33:27 <ais523> it wouldn't surprise me if the best approach would be to write a new one, if you need speed
06:34:42 <ais523> (also, writing fast interpreters for tarpits can be very difficult because many tarpit programs store things in very inefficient encodings, which an optimising interpreter can reverse-engineer and store more efficiently for a huge speedup, but that's generally very difficult to implement)
06:36:20 <Swyrl> Yeah, there's only so much you can do to make the core operations fast before you hit the "bottom".
06:36:57 <ais523> right – optimising the core operations in a tarpit is sometimes helpful, but working out how they optimise as groups and optimising the group gives a much larger benefit
06:37:36 <ais523> e.g. in brainfuck, a good optimising compiler should optimise code like [->+++<] into a multiplication by 3, and that will inherently be much faster than simply trying to implement [ and - and > and + and < as efficiently as possible
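A minimal sketch of the collapse described above, assuming 8-bit wrapping cells; `tape` and `p` are illustrative names, not taken from any particular compiler:

```python
# What an optimizing brainfuck compiler might emit for [->+++<]:
# the whole loop collapses to one constant-time multiply-accumulate.
tape, p = [0] * 30000, 0
tape[p] = 7                                      # example loop counter
tape[p + 1] = (tape[p + 1] + 3 * tape[p]) % 256  # add 3 per iteration, all at once
tape[p] = 0                                      # the loop always exits with 0 here
```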
06:39:19 <ais523> I wrote an optimizing interpreter for The Waterfall Model named Ratiofall, which works by analyzing what a loop is doing and attempting to shortcut as many iterations of the loop as possible, and this works recursively, so often even complex and nested loops can be optimized down to a single constant-time operation
06:39:48 <ais523> (although of course it's impossible, both in general and in this particular case, to make this sort of thing work for all loops)
06:40:03 <Swyrl> Hard to do things like that in Thue and other string rewrite languages unless you write rules in a specific form.
06:40:10 <ais523> with Thue this sort of thing is harder because it's hard to even identify where the loops are
06:41:22 <ais523> oh, another thing that can make Thue hard to implement efficiently is that the search string and replacement can be different lengths, which means that you need to be able to insert into the middle of a string
06:41:53 <Swyrl> Yeah. You solve that with a deque.
06:43:01 <ais523> I don't think a deque would handle that problem – it can only do fast different-length replacements at one particular point in the string (typically the ends), so you would have to scan the entire string repeatedly changing the replacement point
06:43:10 <Swyrl> A deque and a trie are as fast as I've gotten.
06:43:12 <Swyrl> Yes.
06:43:32 <Swyrl> Throw your rules into a trie, with the leaves being the right hand sides of your rules.
06:43:52 <Swyrl> At any evaluation step, try to match the head of the deque with the trie.
06:43:57 <Swyrl> If you can't, advance one character, try again.
06:44:29 <Swyrl> If you can, dequeue the pattern length, and then enqueue the right hand side to the head of the deque.
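A sketch of the trie + deque step described above; the rule set is hypothetical, and detecting that no rule applies anywhere (termination) is omitted:

```python
from collections import deque

# Rules map LHS -> RHS. The trie maps characters to subtries; a complete
# LHS stores its RHS under the None key.
rules = {"aaab": "c", "abc": "xyz"}
trie = {}
for lhs, rhs in rules.items():
    node = trie
    for ch in lhs:
        node = node.setdefault(ch, {})
    node[None] = rhs

def step(state: deque) -> bool:
    """Try to rewrite at the head of the deque; on failure, roll one character."""
    node, rhs, matched = trie, None, 0
    for depth, ch in enumerate(state, 1):
        if ch not in node:
            break
        node = node[ch]
        if None in node:                 # a full LHS matched at the head
            rhs, matched = node[None], depth
            break
    if rhs is not None:
        for _ in range(matched):         # dequeue the pattern length...
            state.popleft()
        state.extendleft(reversed(rhs))  # ...enqueue the RHS at the head
        return True
    state.append(state.popleft())        # no match here: advance one character
    return False
```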
06:44:31 <ais523> so I think there are two basic models you could use: one is the "iterated finite-state transducer" model which I think is what you are using (although it doesn't easily work for the "standard" probabilistic variant of Thue, it works for "Thue with unspecified evaluation order")
06:44:59 <Swyrl> Yeah, the latter bit.
06:45:10 <Swyrl> You can also use the trie to compile to a sequence of nested switch/case statements.
06:45:11 <ais523> and the other would be to have some sort of rope to hold the string, together with an index of what locations matched the LHS of a rule
06:45:13 <Swyrl> Or nested ifs.
06:45:27 <ais523> that would be faster in cases where most of the string doesn't match most of the time
06:45:45 <ais523> I think the trie would ideally be compiled into a finite-state machine
06:46:17 <Swyrl> What would be the difference?
06:46:23 <ais523> https://en.wikipedia.org/wiki/Boyer%E2%80%93Moore_string-search_algorithm
06:46:30 <Swyrl> Time complexity is the same-ish.
06:47:18 <ais523> with the finite-state machine time complexity is O(n) where n is the length of the state string, with the trie it's O(nx) where x is the length of the longest search string
06:47:38 <ais523> although, x is a constant for any given program, so it doesn't change the computational complexity of any given program
06:47:48 <ais523> still, not having to rewind after a failed match makes it faster in practice
06:47:49 <Swyrl> I was going to say.
06:47:50 <Swyrl> It's still linear.
06:48:11 <Swyrl> Why would you rewind? Failure means you advance by one character by "rolling" the deque.
06:48:37 <ais523> say your rule is aaab::=c and your search string is aaaaaab
06:48:38 <Swyrl> Or are you talking about a rewind in terms of control flow.
06:48:49 <Swyrl> Right.
06:48:59 <ais523> first you match aaab against the first four characters of the string, then against characters 2-5, then against characters 3-6, and so on
06:49:03 <ais523> so each character is being checked multiple times
06:49:20 <Swyrl> Mmm.
06:49:42 <ais523> a well-designed finite-state machine can check aaab against characters 1-4, then just check b against 5, because the information that we just saw 3 'a's is already encoded in the machine's state
06:49:50 <Swyrl> Yeah, if we divide it like.. `aaa aaab`.
06:50:18 <Swyrl> You'd traverse `a -> a -> a -> b` and then fail on `a`, advance one character.
06:50:48 <ais523> I guess you can think of a finite state machine as checking from all possible starting positions in parallel
06:51:16 <Swyrl> Yeah you'd have to go through.. eugh, 4 comparisons each.
06:51:19 <Swyrl> Hm.
06:51:21 <Swyrl> Yeah.
06:51:57 <Swyrl> I feel like at that point you're describing regular expressions with string replacement states.
06:52:19 <ais523> right, this is why I said "finite-state transducer" which is effectively the mathematical formalization of that
06:52:30 <Swyrl> And you can morph a trie into a DFA.
06:52:59 <Swyrl> It's just merging common prefixes, but..
06:53:02 <Swyrl> Err, suffixes.
06:53:54 <ais523> the trie already merges common prefixes – the change to a DFA is in working out what part of the tree to return to after a mismatch (which might not necessarily be the root)
06:54:13 <Swyrl> Yeah that part I don't have an intuition about.
06:54:29 <Swyrl> So what would the FSM for `aaab ::= c` be, do you think?
06:55:11 <ais523> it would have 5 states, representing the number of characters of "aaab" that had matched, so I'll call them 0…4 respectively
06:55:50 <ais523> most characters go to state 0 when read; 'a' in state 0-2 adds 1 to the state number; 'b' in state 3 shows a match; 'a' in state 3 remains in state 3; and 4 is the state that shows a match was found
06:56:20 <ais523> normally it's harder for a human to calculate than that (although still pretty easy for a computer), but this string is quite well-behaved
06:57:12 <ais523> OK, so 'b' in state 3 goes to state 4 and that's how it reports the match; state 4 is just a halt/success state
06:58:09 <ais523> in general I think you have a state for all possible prefixes of the search strings, so the behaviour when there is a potential match is identical to that of a trie
06:59:00 <ais523> the only difference is in deciding where to go after a mismatch, which is done by looking at the longest (possibly empty) suffix of the characters read so far that's a prefix of at least one search string, and going to that state
06:59:38 <Swyrl> Hm, I'm gonna try to draw that out.
06:59:40 <Swyrl> Hard to visualize.
07:00:01 <Swyrl> You seek to the next character on successful match given a transition, yeah?
07:00:21 <ais523> e.g. if we have search strings "abcde" and "cdfg", then upon reading "f" after "abcd", the longest suffix of the characters read that's a prefix of a search string is "cdf", so we jump to the "cdf" section of the trie
07:00:57 <ais523> so, there's one state for every string that's a prefix of at least one search string (even if it's a prefix of more than one search string, you still only have the one state)
07:02:23 <ais523> say the state is "X" and you're reading a character 'y' to produce the string "Xy" – you then need to go to the state "Xy", but if there isn't one (because no search string starts with "Xy"), you remove the first character of "X" and try again (effectively advancing the position "you started scanning from" by one character)
07:02:52 <Swyrl> That's kind of interesting..
07:02:53 <ais523> and repeat, eventually (if you just read a character that doesn't appear in any search string) you end up with the empty string, which must be a prefix of every search string
07:03:22 <ais523> and because there are finitely many "X"s and finitely many "y"s, you can just work out the correct state transition for every "Xy" in advance by trying all of them
07:04:03 * Swyrl thinks.
07:08:50 <Swyrl> Wouldn't this require analyzing the initial string -and- the rules?
07:09:36 <ais523> no, just the rules, once you've analyzed it it works on any string (initial or not)
07:09:40 <Swyrl> aaab -> c run on a string "aaaaaaaab" would fit the bill, but what if it was something like "aaaacccccaaaaaaab".
07:10:10 <ais523> so, the rule for state 3 (i.e. after reading "aaa") goes to state 3 upon reading "a", but back to state 0 upon reading "c"
07:10:37 <Swyrl> Right, but you'd need to have your list of symbols to fail on.
07:10:42 <ais523> because "aaac" isn't a prefix of any search string, and nor is "aac" nor "ac" nor "c"
07:11:02 <ais523> and we can analyze that in advance
07:11:43 <ais523> well, your "any symbol not in any of the rules" transition goes to state 0
07:12:19 <ais523> I guess you do need to know the complete list of characters that could occur (or just design the FSM to always go to state 0 upon seeing an unexpected character)
07:12:38 -!- ais523 has quit (Quit: my connection is playing up, reconnecting in the hope of fixing it).
07:13:20 -!- SGautam has quit (Quit: Connection closed for inactivity).
07:13:25 -!- ais523 has joined.
07:13:36 <Swyrl> So 1 -(a)-> 2 -(a)-> 3 -(a)-> 4 -(b) -> 0, and 1 -(a)-> 2 -(a)-> 3 -(a)-> 4 -(*) -> 4.
07:14:04 <ais523> I think you have the states numbered incorrectly, but the general pattern of the transitions looks right
07:14:23 <Swyrl> I just started from 1 instead of 0.
07:15:01 <Swyrl> Whoops, yeah, that last one should be 0.
07:15:15 <ais523> last one should be 1 I think, with your numbering
07:15:15 <Swyrl> Initial state, see 'a', next state, see 'a', next state, see 'a', next state, see 'b' and go to initial state. See anything else, go to the same state.
07:15:31 <Swyrl> Yeah, a 1.. sorry, it's almost midnight and my meds have worn off.
07:15:48 <ais523> this sort of thing is common among programmers, don't worry about it
07:16:12 <Swyrl> That'd be odd, though. I feel like that'd produce some weird results.
07:16:21 <Swyrl> Like that'd match on `aaaaaacb`.
07:17:21 <ais523> the full machine is (with your numbering) 1: {'a': 2, others: 1}; 2: {'a': 3, others: 1}; 3: {'a': 4, others: 1}; 4: {'a': 4, 'b': accept, others: 1}
07:17:53 <ais523> so on 'aaaaaacb', the machine goes from 1 to 2 to 3 to 4 on the leading 'a's, stays at 4 for the rest of the 'a's, then goes back to 1 on the 'c'
07:18:02 <ais523> and then stays at 1 for the 'b' because state 1 doesn't have a rule for 'b'
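The machine just spelled out, as a table-driven sketch using the same 1-based numbering:

```python
# State 1 = nothing matched; 2/3/4 = saw "a"/"aa"/"aaa"; "accept" = found "aaab".
table = {
    1: {"a": 2},
    2: {"a": 3},
    3: {"a": 4},
    4: {"a": 4, "b": "accept"},
}

def find_aaab(s: str) -> int:
    """Return the index just past the first "aaab", or -1 if there is none."""
    state = 1
    for i, ch in enumerate(s):
        state = table[state].get(ch, 1)  # any other character resets to state 1
        if state == "accept":
            return i + 1
    return -1

assert find_aaab("aaaaaab") == 7
assert find_aaab("aaaaaacb") == -1       # the trace walked through above
```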
07:18:49 <Swyrl> Why wouldn't 'c' trigger a transition to 4 as well, or is that just "if we're repeating the previous transition"?
07:19:10 <Swyrl> Like if the transition that led to 4 is 'c' that'd be 'c'.
07:19:25 <ais523> 'c' doesn't trigger a transition to 4 because we're only supposed to be in state 4 if the preceding three characters were "aaa"
07:19:58 <Swyrl> Right, but locally it's just because the prior transition is 'a', right?
07:20:32 <ais523> I don't understand your reasoning
07:20:44 <ais523> we're looking for "aaab" and no other string
07:21:19 <ais523> so if we see a 'c' we know that the string we're looking for doesn't overlap this point in the string being searched, because "aaab" doesn't contain 'c'
07:21:48 <ais523> and so we can go back to the initial state because we know that any match has to be entirely to the right of the current location, so it's safe to forget everything we've seen
07:22:06 <ais523> thus 'c' always goes back to state 1, regardless of which state we were in beforehand
07:22:16 <Swyrl> I'm trying to figure out the general rule for generating these.
07:23:16 <Swyrl> What would "abc -> xyz" look like as a state diagram?
07:23:49 <ais523> 1: {'a': 2, others: 1}; 2: {'b': 3, others: 1}; 3: {'c': accept, others: 1}
07:24:03 <ais523> the right hand half of the rule doesn't matter at all for the matching, just for the replacement
07:24:15 <ais523> to make it into a full finite-state transducer, you need to add in the replacements too
07:24:18 <ais523> so it'd look like this:
07:24:27 <Swyrl> Right.
07:25:00 <Swyrl> So let's add more search strings. "abc", "aab", "acab", "baac".
07:25:23 <ais523> 1: {'a': goto 2, others: echo the read character, goto 1}; 2: {'b': goto 3, others: print 'a', echo the read character, goto 1}; 3: {'c': print "xyz", goto 1; others: print 'ab', echo the read character, goto 1}
07:25:50 <ais523> with your new set of search strings, the states are "a", "b", "aa", "ab", "ac", "ba", "aca", "baa"
07:25:52 <ais523> and ""
07:26:04 <ais523> i.e. everything that's a prefix of any of the search strings
07:26:23 <Swyrl> Hmmmm.
07:26:45 <ais523> ("a" is a prefix of three of the search strings, but we nonetheless only have one "a" state)
07:26:53 <Swyrl> And that's not really any different from a trie.
07:27:05 <Swyrl> You'd still have those same states.. ish.
07:27:13 <ais523> right, the transitions for when we read something that could extend a match are identical to the trie
07:27:38 <ais523> the only difference is the transitions that produce something that isn't a valid prefix
07:28:29 <ais523> e.g. say we read "a" in state "baa" – "baaa" doesn't match, so we would transition to "aaa" if it were a state (but it isn't), and so we instead transition to "aa" which is a state
07:28:54 <ais523> (and when constructing the finite-state transducer, the "b" and "a" that "fall off the start" get echoed)
07:29:16 <Swyrl> ..What the heck is the algorithm to do this?
07:30:17 -!- tromp has joined.
07:30:43 <Swyrl> DFA minimization?
07:30:53 <ais523> a) calculate all prefixes of the rule LHSes; b) append all possible symbols to each of those prefixes (you can optimize for symbols that can't occur); c) for each of those strings made by appending prefix+symbol, take the longest suffix of it that's one of the prefixes; the suffix becomes the next state to go to, the bit before the suffix gets echoed
07:31:18 <ais523> except d) if the prefix+symbol happens to be a rule LHS, instead print the RHS and goto the initial state
07:31:28 <ais523> (assuming we're talking about unspecified-evaluation-order Thue)
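A brute-force sketch of steps a)-d) on a hypothetical two-rule program. States are represented by the prefix strings themselves, and each table entry is (text to echo, next state); per the description above, a completed LHS just emits its RHS and restarts (feeding the output back in for rescanning is left out):

```python
rules = {"abc": "xyz", "aab": "q"}       # hypothetical rule set
alphabet = sorted({c for lhs in rules for c in lhs})

# a) all prefixes of the rule LHSes, including the empty initial state
prefixes = {lhs[:i] for lhs in rules for i in range(len(lhs))}

table = {}
for state in prefixes:
    for sym in alphabet:                 # b) append every possible symbol
        s = state + sym
        if s in rules:                   # d) a full LHS: print the RHS, restart
            table[state, sym] = (rules[s], "")
            continue
        for k in range(len(s) + 1):      # c) longest suffix that is a prefix
            if s[k:] in prefixes:
                table[state, sym] = (s[:k], s[k:])  # echo the rest, jump there
                break

assert table["aa", "b"] == ("q", "")     # completing "aab" emits its RHS
assert table["aa", "a"] == ("a", "aa")   # "aaa": echo one 'a', stay at "aa"
```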
07:32:30 <Swyrl> For a), that's just "all possible slices of a string starting from the first character and ending with the last"?
07:33:00 <ais523> no, it's prefixes, not slices
07:33:06 <ais523> and it's only prefixes of rule LHSes
07:33:27 <ais523> so, all substrings of rule LHSes that start at the start, but end before the end
07:33:48 <Swyrl> That's a lot.
07:34:01 <Swyrl> Even for a string like "aaab".
07:34:16 <ais523> it's less than or equal to the total number of characters in the rule LHSes
07:34:30 <ais523> e.g. a 10-character string has 10 prefixes
07:34:41 <ais523> so, linear in the size of the program
07:34:44 <Swyrl> I'm sorry, I'm not grokking this properly, what exactly do you define as a prefix here?
07:35:04 <ais523> so for "abcde", the prefixes are "", "a", "ab", "abc", "abcd"
07:35:06 <Swyrl> For me, if I have a string abcd, [a, ab, abc, abcd]-
07:35:14 <Swyrl> Yeah.
07:35:36 <Swyrl> All substrings starting from the first character, and ending right before the end.
07:35:41 <ais523> yep
07:35:50 <Swyrl> Which I said as a slice earlier.
07:35:50 -!- SGautam has joined.
07:36:03 <ais523> "slice" to me doesn't necessarily imply starting at the start
07:36:39 <ais523> a prefix is a sort of slice, but not the only sort of slice
07:36:40 <Swyrl> Ah.
07:36:43 <Swyrl> Makes sense.
07:39:30 <ais523> anyway, this conversation has been useful to me because I've been thinking about the use of iterated finite-state transducers as a parsing automaton – it's been interesting to learn that my automaton is exactly the same thing as "left-to-right Thue"
07:40:04 <ais523> (although, the parsing automaton has restrictions on it to ensure it finishes in linear time, whereas Thue doesn't have those restrictions and is Turing-complete)
07:40:23 <ais523> I guess this makes sense because Thue was originally invented as a type of grammar, before it became a programming language
07:42:19 <ais523> anyway, this gives an idea for the next step in optimising the Thue interpreter: suppose you are using the search algorithm of "make the leftmost replacement, then scan the string again and make the leftmost replacement in the new string"
07:42:47 <ais523> you don't have to scan the string again from the start, there is a limit on how far to the left the next match can occur
07:43:13 <Swyrl> I feel like you could use this in tandem with a deque.
07:43:23 <ais523> yes, at this point I think you could
07:43:36 <ais523> the "leftmost replacement" rule is particularly amenable to being optimized
07:43:36 <Swyrl> I've always wanted to upgrade my trie approach to a DFA.
07:43:50 <Swyrl> I managed to get single rule firing down to like.. 50ns.
07:43:57 <Swyrl> Including replacement.
07:44:08 <ais523> are you using an array-based deque? those are normally the fastest deque implementations
07:44:17 <Swyrl> Yep.
07:44:23 <Swyrl> It's just a circular buffer.
07:44:27 <Swyrl> Head and tail pointers.
07:45:04 <ais523> yes, the only hard part is resizing the buffer if the string grows, and even then it's not that hard, just a pain to get right
07:45:26 <ais523> hmm, I guess this is actually the same thing as a gap buffer
07:45:38 <Swyrl> It's not that hard. You do have a max string length, but.
07:45:52 <Swyrl> You can chain buffers together and allocate/expand as needed.
07:46:23 <ais523> I believe the correct technique is, if the circular buffer fills, to create a new circular buffer that's (e.g.) twice as large and then copy the entire contents over
07:46:54 <ais523> this is linear time because the time spent copying the data is amortized over the time spent filling the buffer in the first place
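A minimal sketch of that grow-by-doubling circular buffer; method names are illustrative:

```python
class RingDeque:
    """Array-backed deque: a head index plus a count into a circular buffer."""
    def __init__(self, cap: int = 8):
        self.buf = [None] * cap
        self.head = self.size = 0

    def _grow(self):
        # Copy into a buffer twice as large; the copy is amortized over the
        # pushes that filled the old buffer, so pushes stay O(1) on average.
        new = [None] * (2 * len(self.buf))
        for i in range(self.size):
            new[i] = self.buf[(self.head + i) % len(self.buf)]
        self.buf, self.head = new, 0

    def push_back(self, x):
        if self.size == len(self.buf):
            self._grow()
        self.buf[(self.head + self.size) % len(self.buf)] = x
        self.size += 1

    def pop_front(self):
        x = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        self.size -= 1
        return x
```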
07:55:38 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
08:02:54 <Swyrl> A better way would be to just chain "pages" together in a ring (previous/next pointers), with each page being a fixed size. Your head/tail pointers are then a combo of index + pointer.
08:03:57 <ais523> it wouldn't surprise me if that's slower – you're paying time every time you move between pages, rather than just when the size grows
08:04:39 <Swyrl> Not sure how.
08:04:52 <ais523> although I guess the extreme way to get this to work would be to use the processor's page table, and just change the relationship between logical and physical memory to move memory around without actually copying it
08:05:00 <Swyrl> You don't copy anything.
08:05:17 <ais523> Swyrl: yes but I think the copy is actually faster
08:05:53 <ais523> otherwise you have to deal with pagebreaks for the rest of the program's runtime, which is linear in the number of times you cross a pagebreak rather than linear in the total amount of memory you use
08:06:17 <Swyrl> If I have a "page" or "chunk" size of 256 elements, and I fill that up, I allocate another "chunk", set the current chunk's "next" pointer to the newly allocated chunk, and then set my "head" or "tail" pointer to that chunk, and reset the index.
08:06:33 <Swyrl> You have a sliding window back and forth. It's not circular.
08:06:48 <ais523> yes, but say you move back to the previous chunk – now you need to follow the pointer between the chunks
08:07:25 <Swyrl> So let's say you "roll" the deque.
08:07:38 <Swyrl> Meaning dequeue from tail, enqueue at head, or vice versa.
08:08:16 <Swyrl> Unless you're frequently flipping back and forth on chunk boundaries, you'll pay the cost infrequently if your chunks are sufficiently large.
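A sketch of the paged layout being described, with hypothetical names; only one direction of movement is shown, and recycling exhausted pages is left as a comment:

```python
CHUNK = 256                              # fixed page size, as in the discussion

class Page:
    __slots__ = ("data", "prev", "next")
    def __init__(self):
        self.data = [None] * CHUNK
        self.prev = self.next = None

class ChunkDeque:
    """Head and tail are each a (page, index) pair; pages form a chain."""
    def __init__(self):
        self.head_page = self.tail_page = Page()
        self.head = self.tail = 0        # live region runs from tail to head

    def push_head(self, x):
        if self.head == CHUNK:           # page full: allocate and link a new one
            page = Page()
            page.prev = self.head_page
            self.head_page.next = page
            self.head_page, self.head = page, 0
        self.head_page.data[self.head] = x
        self.head += 1

    def pop_tail(self):
        if self.tail == CHUNK:           # crossed a boundary: follow the pointer
            self.tail_page, self.tail = self.tail_page.next, 0
            # (the exhausted previous page could be recycled here)
        x = self.tail_page.data[self.tail]
        self.tail += 1
        return x
```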
08:08:30 -!- Sgeo has quit (Read error: Connection reset by peer).
08:09:06 <ais523> I feel like there are some situations in which frequently flipping back and forth might happen
08:09:20 -!- tromp has joined.
08:09:32 <ais523> in the random/average case, this'll happen at a rate inversely proportional to the size of the chunks
08:09:45 <ais523> hmm, maybe the compromise is to make each chunk twice as large as the previous
08:09:51 <Swyrl> Could!
08:10:00 <ais523> oh, I just understood why copying is faster from a computational complexity point of view
08:10:16 <ais523> the length of time it takes to *allocate* all the chunks is O(total used memory)
08:10:25 <ais523> and the length of time it takes to do the copies is also O(total used memory)
08:10:54 <ais523> so, the time spent doing the copy can be counted against the time taken doing the allocation
08:11:48 <Swyrl> I'm not sure that that holds. Allocating the double-sized queue and then doing the copy every time is more operations than just allocating.
08:12:05 <Swyrl> You have allocate + copy instead of just allocate.
08:12:57 <ais523> it's more operations but they're proportional to the same value
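(Worked out: doubling from capacity 1 up to N copies 1 + 2 + 4 + … + N/2 = N − 1 elements in total, i.e. less than one element-move per element ever pushed, which is why the copying doesn't change the asymptotic cost.)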
08:15:25 <Swyrl> So let's say you have a chunk setup like [T.....]-[.....]-[...H.]
08:15:37 <Swyrl> And you move the head to the right.
08:16:00 <Swyrl> You'd just allocate another chunk, and move the head to the right. [T.....]-[.....]-[....]-[H.....]
08:16:25 <Swyrl> I don't see how that's the same operationally. You're doing allocations that are smaller than the string you've stored at that point.
08:16:54 <ais523> well, say you have 65536-byte chunks; now you have to follow a chunk pointer on, on average, 1 in 65536 operations
08:16:56 <Swyrl> So if reserving memory takes linear time with respect to how much you wanna reserve, it's both less than the entire stored string -and- less than double the capacity of the buffer.
08:17:20 <ais523> you are making the "expand" operation cheaper but the "move head/tail" operation more expensive
08:17:43 <ais523> and the latter happens a lot more often – you have to move the head/tail 65536 times to even make an expand happen
08:18:26 <ais523> the vast majority of programs will move quadratically or even exponentially more than they expand
08:18:42 <ais523> so, a tiny extra cost in the move will eventually end up outweighing a linear cost in the expansion
08:20:01 <ais523> (incidentally, most modern operating systems don't have an "allocate as a copy of" primitive, but they should – it would be faster than an allocation and a copy because the allocation has to spend time zeroing the memory in order to stop programs looking at other programs' deallocated memory)
08:20:22 <ais523> (and the point is that zeroing memory takes a comparable length of time to copying memory)
08:22:12 <Swyrl> So, the trade-off is useful because it prevents you from keeping the memory you've allocated. If you expand by twice the buffer size every time you fill it, but never release the memory or downsize, that's wasting space.
08:22:58 <Swyrl> The only valid chunks, i.e. the ones that need to be maintained, are the ones between the head and tail pointer.
08:23:06 <Swyrl> Tail can never be after head, head can never be before tail.
08:25:26 <ais523> oh, the normal technique is to use a circular buffer
08:25:34 <Swyrl> Right.
08:25:49 <Swyrl> But in this scheme, the circular nature is an illusion.
08:25:55 <ais523> so when you have one big chunk you can have the "edges" contain useful data, and the head and tail closer to the middle, with the bit between them being uninitialized
08:26:29 <ais523> that means that if the head and tail are constantly moving, but the length of the deque isn't growing, you can avoid needing to do memory operations at all
08:26:51 <ais523> anyway, with respect to "never release/downsize the memory", that's what most programs do already
08:27:06 <ais523> freeing memory back to the OS is a) slow and b) usually impossible due to fragmentation
08:27:12 * Swyrl shrugs.
08:27:53 <ais523> historically in most standard libraries, a call to free() will make the memory available to future calls to malloc() but otherwise does nothing to affect the system state, e.g. it doesn't return the memory for use by other programs
08:28:17 <ais523> (although glibc is an exception if you make sufficiently large allocations)
08:29:18 <ais523> deallocating memory is actually the most difficult part of writing a memory allocator, if you never have to deal with deallocation it is trivially easy
08:29:48 <ais523> anyway, for a program like a Thue interpreter, I'm pretty sure it is more efficient to never deallocate
08:29:53 <Swyrl> There was a neat talk about Zig about this. And yes, I agree. Fixed object allocators are usually the best.
08:30:17 <Swyrl> Your fragment size is 1.
08:30:54 <ais523> yes, if all the mallocs and frees in the same program are for blocks of the same size it's also easy
08:31:10 <ais523> (although, only in terms of reusing memory in the program – it's still hard to release it back to the OS)
08:40:13 <esolangs> [[COSOL]] M https://esolangs.org/w/index.php?diff=123664&oldid=109310 * SirBrahms * (-1)
09:13:57 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
09:15:06 <esolangs> [[Adj]] M https://esolangs.org/w/index.php?diff=123665&oldid=123663 * None1 * (+8)
09:18:27 <esolangs> [[Adj]] M https://esolangs.org/w/index.php?diff=123666&oldid=123665 * None1 * (+23)
09:19:23 <esolangs> [[A+B Problem]] https://esolangs.org/w/index.php?diff=123667&oldid=123614 * None1 * (+68) /* 1 */ Added Adj implementation
09:26:34 <esolangs> [[Adj]] M https://esolangs.org/w/index.php?diff=123668&oldid=123666 * None1 * (-2)
09:27:43 <b_jonas> "<Swyrl> I'm trying to figure out the general rule for generating these." => the Cormen, Leiserson, Rivest, Stein, "Introduction to Algorithms" book chapter 32.3 describes it for now, until TAOCP volume 5 is ready.
09:28:10 <b_jonas> it's rather weird and I don't recall the details
09:28:27 <Swyrl> Wonder if there's an equivalent algorithm or parsing strategy for structured programming.
09:29:15 <Swyrl> Instead of state machines. I know there's a direct conversion by virtue of a state variable and if/else but I have a feeling there's something you can do there that gets at or close to the same complexity as the generated FST.
09:29:36 <Swyrl> I'll have to think on that. And thanks, b_jonas, ais523.
09:29:52 <b_jonas> Swyrl: also maybe look at the ICFP contest 2007, since it has the Fuun DNA language which is kind of like Thue, and it may have been worthwhile for some teams to make an optimized interpreter for it
09:30:45 <Swyrl> Ooh.
09:31:25 <Swyrl> Self-modifying.
09:31:27 * Swyrl peeps.
09:36:42 <b_jonas> or rather, Fuun DNA is similar to Slashalash except it's more powerful and so less of a tarpit
09:40:15 <b_jonas> ais523 “most modern operating systems don't have an ‘allocate as a copy of’ primitive”: I think Linux has that, at least if you're copying OS-page-aligned data, but it's a rather obscure thing that doesn't come up often because normally a program can just reuse previously allocated memory to avoid the zeroing
09:41:12 <b_jonas> besides the zeroing is not much of an overhead if you're overwriting immediately anyway, because it only has to write the farther levels of the cache hierarchy once
09:45:02 <b_jonas> or at least, that used to be the case: when you allocated a large amount of memory, the kernel didn't need to zero and give it to you all at once, it could zero each page when you first touch it. this is probably no longer exactly true because of automatic hugepages.
09:45:08 <ais523> b_jonas: hmm, which system call is that? there's mremap but that doesn't copy the data, it moves it and/or hardlinks it (of course, moving would be good enough for the purpose of this program)
09:45:44 <ais523> and yes, the zeroing is lazy nowadays but that doesn't really change much in terms of the time spent, it still has to be spent at some point
09:46:32 <ais523> Windows at least used to have a background memory zeroing thread, but Linux rejected the idea because of cache thrashing (although, they started reconsidering it once uncached write instructions became generally available, not sure if it was implemented then)
09:50:50 <int-e> ais523: Maybe a combination of memfd_create and mmap with MAP_PRIVATE will do it?
09:52:02 <int-e> (it's interesting that this is hard... COW pages exist for fork() after all)
09:55:25 <ais523> int-e: I think that might work if you knew before you started populating the memory that you were going to need to copy it
09:55:35 <ais523> but yes, I agree that it's interesting that it's hard
09:56:43 <ais523> hmm, actually, if you MAP_PRIVATE the same file twice, do you get two different copies? I suppose you do?
10:01:15 <int-e> I'd hope so, and I don't know.
10:02:33 <int-e> If you ever want to make a copy of a copy this approach will break down.
10:09:32 <int-e> Regarding MAP_PRIVATE... POSIX is not clear about that ("modifications to the mapped data by the calling process shall be visible only to the calling process and shall not change the underlying object") and neither is the Linux manpage. Hmm.
10:11:49 <b_jonas> ais523: I'm not sure if it's available in full generality, but yes, there's MAP_PRIVATE to map a private writable copy of a file, and vmsplice lets you make the kernel copy data though I don't know the details.
10:11:49 <int-e> https://stackoverflow.com/questions/16965505/allocating-copy-on-write-memory-within-a-process "Then use libsigsegv to intercept write attempts, manually make a copy of the page and then mprotect both to read-write."
10:13:03 <shachaf> Sounds awful.
10:13:40 <ais523> int-e: nowadays you can use userfaultfd for that, which is slightly more efficient – but both solutions still have the problem that you allocate zeroed memory and then copy into it
10:14:33 <ais523> as for MAP_PRIVATE, I'm pretty sure it's possible to map /dev/zero and I'd hope you'd get zeroed memory both times, rather than the second time contain the same data as the first
10:16:38 <b_jonas> I wouldn't worry about the zeroing. If it's really that important, eventually the CPU developer folks will change the cache architecture so that the cache can zero a page on its own. But I don't think it will matter in the case you want, because if you overwrite the page immediately then the copy doesn't cost much.
10:17:20 <ais523> it does for large pages I think
10:17:38 <ais523> if it's just a 4KiB page then yes, the whole thing is zeroed and is now in L1, then you can copy it and the source of the copy also fits into L1 at the same time
10:17:54 <int-e> ais523: but it's also conceivable that /dev/zero is a special case and "real" files behave differently... but not terribly likely, I think.
10:17:58 <ais523> so you just have to pay the cost of writing 4KiB into L1 twice
10:18:23 <ais523> but with larger pages, the start of the page will fall out of cache before the end is zeroed, so you're going at a much slower speed
10:18:50 <ais523> I wonder if POSIX was intentionally vague because they didn't want to constrain OSes, they do that sort of thing a lot
10:19:10 <int-e> (for one I bet if private mappings were shared within a process this would be an attack vector, maybe on the dynamic linker, that I'd have seen somewhere)
10:21:14 <ais523> they logically shouldn't be, and I think they probably aren't, but it's weird that that isn't clearly documented
10:24:47 <ais523> hmm, another way to implement it would be to have something similar to /dev/mem, but for logical/virtual memory (from the mapping process's point of view) rather than physical memory
10:24:54 <ais523> then, you could just MAP_PRIVATE a section of your own memory
10:25:46 <ais523> (although that would have the usual MAP_PRIVATE issues of needing prefaulting in order to ensure that you are taking a snapshot, rather than some parts of the memory being loaded later and seeing changes that had been made since the map)
10:28:58 <int-e> Anyway, a quick test with a real file indicates separate COW mappings. Same for memfd_create().
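Roughly what that quick test might look like, as a Linux-only Python sketch using os.memfd_create; the name and sizes are illustrative:

```python
import mmap, os

fd = os.memfd_create("cow-test")          # anonymous in-memory file (Linux)
os.ftruncate(fd, 4096)
os.pwrite(fd, b"original", 0)

a = mmap.mmap(fd, 4096, flags=mmap.MAP_PRIVATE)
b = mmap.mmap(fd, 4096, flags=mmap.MAP_PRIVATE)
a[:7] = b"changed"                        # copy-on-write via the first mapping
assert b[:8] == b"original"               # the second mapping is unaffected
assert os.pread(fd, 8, 0) == b"original"  # and so is the underlying file
```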
10:30:21 <ais523> hmm, I wonder how you submit patches for the man pages
10:37:10 -!- tromp has joined.
10:50:20 <esolangs> [[Chicken]] M https://esolangs.org/w/index.php?diff=123669&oldid=117332 * None1 * (+29) /* External resources */
10:52:20 <esolangs> [[Chicken]] M https://esolangs.org/w/index.php?diff=123670&oldid=123669 * None1 * (+170) /* Instructions */
11:09:07 -!- wib_jonas has joined.
11:14:39 <wib_jonas> ais523: there's a suggestive note in the OpenBSD manpage https://man.openbsd.org/mmap on this, search for /MAP_COPY/
11:25:39 <ais523> wib_jonas: MAP_COPY is a GNU Hurd flag for mmap, I think
11:26:07 <ais523> Linux didn't implement it in favour of MAP_DENYWRITE, but then MAP_DENYWRITE got removed because it was inherently a DoS vector
11:26:23 <ais523> …but didn't add anything else to solve the same underlying problem
11:26:50 <wib_jonas> ais523: in any case, if you want to map a copy of your process's existing memory, and you can go linux-specific, then you can write that memory into a file in /dev/shm, and then map MAP_SHARED that part of the file. this way the kernel does the copying and presumably doesn't have to zero pages, except where you're writing not page aligned
11:27:14 <ais523> wib_jonas: that doesn't map a copy, it double-maps the same part of memory
11:27:21 <ais523> overwriting either will overwrite the other
11:27:43 <wib_jonas> write copies from your memory
11:27:50 <ais523> ah right, I see
11:28:24 <ais523> now I am wondering if you could do something with vmsplice, probably not though because you can't map a pipe
11:29:06 <wib_jonas> but you have to be careful with it, because the kernel has to keep track of what's mapped where. Presumably you'd just create a large file in /dev/shm in advance, map it non-readable, then write to the file and mprotect the region to make it available
11:29:15 <ais523> btw, this reminds me of one of my great ideas: a kernel implementation of pipes mostly in userspace, along the lines of a futex
11:29:52 <ais523> so that the two programs connected by the pipe just directly read and write into memory that's mapped by both of them
11:30:00 <wib_jonas> ais523: I think you can use splice or vmsplice if you want the kernel to copy the same pages to multiple copies in memory. you could plausibly do that for some esoteric stuff.
11:30:22 <ais523> wib_jonas: there's a system call "tee" which copies pages out of a pipe whilst leaving the originals in the pipe
11:30:34 <wib_jonas> ais523: right
11:31:23 <ais523> although it's splice-style copying of references to the pages, rather than actual copies in memory
11:32:41 <wib_jonas> ais523: well, there are multiple libraries that try to implement data structures over interprocess shared memory. some of them probably support something like a pipe. I find the whole thing confusing because there are like four different kernel interfaces for shared memory just on Linux, and I think at least one more on win32, and then the
11:32:42 <wib_jonas> libraries abstract some of these away, but I don't understand how we ended up with so many different interfaces on Linux in the first place
11:33:31 <ais523> wib_jonas: if the kernel were involved, though, it could allow regular reads and writes to the userspace pipes from processes that didn't know how they worked
11:34:01 <ais523> and I think the reason Linux has so many interfaces is that they keep inventing better ones, but have to keep the old ones around for compatibility
11:34:18 <wib_jonas> ais523: yes, but IIUC tee lets you copy reference only between pipes, and then splice lets you write from those pipes into any file, hopefully with as few copy operations as the kernel can get away with
11:35:08 <ais523> right
11:36:35 <wib_jonas> yes, but in particular in the case of shared memory I don't really understand what is improved between the SysV shared memory and the POSIX shared memory interfaces. I understand the Linux-specific underlying interface which is that shared memory is just files in /dev/shm which is a tmpfs, and I think POSIX  shared memory is a libc interface over
11:36:35 <wib_jonas> that, but why is there a POSIX shared memory interface in the first place instead of standardizing SysV shared memory?
11:37:41 <wib_jonas> no wait, I think POSIX does standardize the SysV shared memory
11:37:48 <wib_jonas> but also has the new interface
11:39:02 <wib_jonas> https://man7.org/linux/man-pages/man7/sysvipc.7.html is the SysV shared memory interface, while https://man7.org/linux/man-pages/man7/shm_overview.7.html is the POSIX shared memory interface
11:39:30 <wib_jonas> the latter says "POSIX shared memory provides a simpler, and better designed interface"
11:39:42 <ais523> POSIX isn't just for Unix-alikes, I think many of the new POSIX interfaces are to make it work on, e.g., Windows
11:39:55 <ais523> like, posix_spawn exists because not all OSes can do a fork
11:40:40 -!- __monty__ has joined.
11:40:59 <ais523> oh right, isn't there an inherent resource leak in SysV shared memory?
11:41:02 <wib_jonas> I'm not convinced. I think posix_spawn exists to preserve the vfork optimization without exposing it to the user, it should exist even without Windows
11:41:08 <ais523> like, if a process crashes the OS has no idea when to deallocate it
11:41:22 <ais523> or am I thinking about something else
11:42:02 <wib_jonas> as in, even if you can fork, that requires the OS to copy a lot of administrative data structures, vfork is cheaper but very messy, so it's worth abstracting it away into a library
11:42:37 <ais523> vfork's behaviour is kind-of unclear in between OSes, and it has odd restrictions that aren't always documented well
11:42:40 <wib_jonas> vfork is so messy that you can't use it at all without some low-level knowledge of what the compiler even compiles as a write into memory
11:42:42 <ais523> so abstracting that to a library makes sense
11:43:00 <ais523> interestingly, gcc seems to interpret vfork as equivalent to setjmp
11:43:25 <ais523> at least for the purpose of warning about incorrect uses
11:43:42 <ais523> (that said, I think gcc has some false-positive setjmp warnings, or at least did last time I tried)
11:43:58 <wib_jonas> that makes sense, it's a function that can return twice
11:44:34 <wib_jonas> a C function; for a prolog function that's normal
11:44:41 <wib_jonas> same for a scheme function
11:45:06 <wib_jonas> or an unlambda function while we're there
11:45:39 <ais523> I guess this creates philosophical problems about what returning is
11:45:55 <ais523> in prolog, the compiler is basically rewinding execution to back when the function was running – although setjmp is pretty similar in spirit to that
11:47:15 <wib_jonas> and setjmp is hard to use correctly too, which is why http://software.schmorp.de/pkg/libcoro.html abstracts away one particular use case that you can implement with setjmp but where you don't want to expose the weird thing about returning twice
11:47:26 <ais523> call/cc is also similar to a rewind in spirit
11:47:58 <ais523> I had a call/cc-like operator planned for an Underload-like language, which basically just took a copy of the program-to-run and made it into a stack element
11:48:37 <ais523> it's hard to think of that as returning twice, although in a way it does – the remainder of the program is similar in nature to a call stack and you are copying it, so running past the same point on the stack twice (or on the stack and its copy) can be thought of as returning twice
11:50:16 <wib_jonas> scheme and ruby expose this in a maximal way, because they allow full call/cc but also conveniently usable mutable local variables and mutable arrays. whereas prolog makes mutable state somewhat hard to use, and unlambda doesn't have mutable variables at all.
11:50:41 <wib_jonas> well, ruby 1.8; I don't know if ruby 2 still supports call/cc
11:51:15 <wib_jonas> it was already an esoteric addition to ruby 1, I think the source code has a comment saying that they aren't doing this because it's useful, they're doing it because they can
11:52:28 <ais523> at least SWI-Prolog has a mutable assignment operator that outright overwrites things that you would expect to be immutable
11:52:33 <ais523> but I forget the details
11:53:38 <ais523> nb_setarg(+Arg, +Term, +Value) Assigns the Arg-th argument of the compound term Term with the given Value as setarg/3, but on backtracking the assignment is not reversed.
11:53:50 <ais523> there we go
11:57:32 -!- tromp has quit (Read error: Connection reset by peer).
11:57:56 <wib_jonas> yeah, there are multiple brands of mutability; but in scheme, the default way to create a local variable (lambda or let) is just mutable (or potentially mutable; a compiler could detect whether you ever use it as a mutable), and the default way to heap-allocate an array of object references (make-vector) makes the array elements mutable. the same
11:57:57 <wib_jonas> is true for ruby or C.
11:58:45 <wib_jonas> whereas prolog is designed with storage immutable by default, sort of like Haskell
11:59:46 <wib_jonas> hmm, now I wonder, is there an unlambda extension that lets you mutate the bindings of a k or s?
12:00:31 <ais523> I don't think so, but it would fit what seems to be the spirit of the language (which is to have something that seems at first neat and mathematically elegant, but then to add on features that are surprisingly hard to reason about and break the abstractions)
12:01:32 <ais523> also, I am not sure I would describe Prolog's storage as immutable, it's more "narrowable"
12:01:43 <ais523> in that variables start out undefined and your various operations on them make them more defined
12:03:01 <int-e> <3 lazy evaluation
12:03:36 <int-e> (it's not quite the same, since Prolog is happy to spit out such a "partial" result with a placeholder variable where the hole is)
12:03:45 <int-e> (or holes)
12:04:19 <wib_jonas> ais523: right, assign-once storage. Rust has now added specific support for assign-once cells in its standard library.
12:04:36 <ais523> it's more like assign-gradually storage
12:04:48 <ais523> which is something that lots of programming languages feel like they could benefit from, but is hard to formalize
12:05:55 <wib_jonas> but isn't that assign gradually just made of data structures that contain multiple assign-once cells in it?
12:06:26 <wib_jonas> it probably isn't if you look at the implementation, but that's invisible
12:06:41 <ais523> in very primitive Prologs, possibly – but in practical usage you have lots of assignments that work more like "X #> 4"
12:06:49 <ais523> which narrow the range of something
12:07:18 <wib_jonas> ah you're talking about the finite domain constraint satisfaction extension
12:07:26 <ais523> likewise, it's common to end up asserting that a list has at least X elements, which you can implement with assign-once cells if you use the car/cdr approach to list implementation, but which isn't how you normally think when programming it
12:08:01 <ais523> I think the #> syntax is pretty common for all constraint satisfaction extensions, whether finite domain or not
12:09:07 <wib_jonas> ok
12:09:49 <wib_jonas> my point of view is probably skewed because I used prolog as a somewhat inconvenient language that I translate ordinary code into
12:10:15 <wib_jonas> with extra features that just get in my way
12:11:02 <ais523> I mostly use prologs (and Brachylog in particular) in programming competitions (typically code golf), where constraints are often very useful
12:11:22 <wib_jonas> I have used the constraint solving for a puzzle once
12:11:44 <wib_jonas> to verify that the solution to a certain tiling puzzle is unique
12:13:13 <wib_jonas> plus I've met https://www.perlmonks.com/?node_id=940355 where someone else used prolog to solve a puzzle that I posed on the forum
12:15:22 <esolangs> [[Chicken]] M https://esolangs.org/w/index.php?diff=123671&oldid=123670 * None1 * (+91) /* Instructions */ According to armichaud's explanation, char command pushes HTML escaped characters instead of real characters
12:16:34 <esolangs> [[Chicken]] https://esolangs.org/w/index.php?diff=123672&oldid=123671 * None1 * (+105) /* External resources */
12:33:20 -!- SGautam has quit (Quit: Connection closed for inactivity).
12:37:18 -!- wib_jonas has quit (Quit: Client closed).
12:38:06 -!- tromp has joined.
13:00:42 <esolangs> [[Putlines]] N https://esolangs.org/w/index.php?oldid=123673 * Yourusername * (+594) Created page with "Putlines is a esolang that uses line selectors like "Run lines N to N" to loop = Commands = == Normal == === Print === print "hi" print "this is how" print "to do this" === Comments === #this counts as a line btw #ok === Labels === loop: print "this is used
13:02:36 <esolangs> [[Putlines]] https://esolangs.org/w/index.php?diff=123674&oldid=123673 * Yourusername * (+28) /* Swap lines N and N */
13:03:42 <esolangs> [[FatFinger]] https://esolangs.org/w/index.php?diff=123675&oldid=123517 * Rottytooth * (+272) /* Examples */
13:04:30 <esolangs> [[FatFinger]] M https://esolangs.org/w/index.php?diff=123676&oldid=123675 * Rottytooth * (-26) /* FizzBuzz */
13:27:49 -!- tromp has quit (Read error: Connection reset by peer).
13:32:41 -!- ais523 has quit (Ping timeout: 256 seconds).
13:49:02 -!- ais523 has joined.
14:30:26 -!- slavfox has quit (Quit: ZNC 1.8.2 - https://znc.in).
14:31:47 -!- slavfox has joined.
14:46:09 -!- ais523 has quit (Quit: quit).
14:47:23 -!- SGautam has joined.
14:48:14 -!- GregorR has quit (Quit: Ping timeout (120 seconds)).
14:48:28 -!- GregorR has joined.
15:01:30 <esolangs> [[FatFinger]] https://esolangs.org/w/index.php?diff=123677&oldid=123676 * Rottytooth * (+997) /* Examples */
15:14:18 <esolangs> [[FatFinger]] https://esolangs.org/w/index.php?diff=123678&oldid=123677 * Rottytooth * (+819) Filled out overview
15:24:59 <esolangs> [[FatFinger]] https://esolangs.org/w/index.php?diff=123679&oldid=123678 * Rottytooth * (+1221) Added Fat Dactyls
15:37:52 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123680&oldid=123457 * Rottytooth * (-23)
15:43:50 <esolangs> [[Tare]] https://esolangs.org/w/index.php?diff=123681&oldid=123639 * Ais523 * (+39) /* See also */ another language based on similar principles
16:02:44 <esolangs> [[Olympus]] N https://esolangs.org/w/index.php?oldid=123682 * Rottytooth * (+8078) Adding Olympus. Still needs additional detail
16:05:07 <esolangs> [[Olympus]] M https://esolangs.org/w/index.php?diff=123683&oldid=123682 * Rottytooth * (+20)
16:28:04 <esolangs> [[Captive]] https://esolangs.org/w/index.php?diff=123684&oldid=123435 * Rottytooth * (-4) Moved name to box
16:31:19 <esolangs> [[Special:Log/newusers]] create * TheguywholetthedoGSout * New user account
16:31:57 -!- amby has joined.
16:33:27 <esolangs> [[Olympus]] M https://esolangs.org/w/index.php?diff=123685&oldid=123683 * Rottytooth * (+48) /* Gods */ Formatting
16:34:48 <esolangs> [[FatFinger]] M https://esolangs.org/w/index.php?diff=123686&oldid=123679 * Rottytooth * (+20)
16:38:15 <esolangs> [[Olympus]] M https://esolangs.org/w/index.php?diff=123687&oldid=123685 * Rottytooth * (+0) /* Building A Program */
16:47:58 <esolangs> [[Language list]] https://esolangs.org/w/index.php?diff=123688&oldid=123630 * Rottytooth * (+114) Added missing langs
16:50:09 <esolangs> [[Velato]] https://esolangs.org/w/index.php?diff=123689&oldid=108925 * Rottytooth * (+292) Added ref box
16:50:20 <esolangs> [[Velato]] https://esolangs.org/w/index.php?diff=123690&oldid=123689 * Rottytooth * (-20)
16:51:03 <esolangs> [[Velato]] M https://esolangs.org/w/index.php?diff=123691&oldid=123690 * Rottytooth * (-1)
16:51:44 <esolangs> [[Velato]] M https://esolangs.org/w/index.php?diff=123692&oldid=123691 * Rottytooth * (+8)
17:22:00 -!- tromp has joined.
17:24:46 <esolangs> [[N10]] N https://esolangs.org/w/index.php?oldid=123693 * AnotherUser05 * (+263) Created page with "{{wrongtitle|title=n10}} '''n10''' is an esolang invented by [[User:AnotherUser05]]. ==Syntax== Every command is one character followed by a numerical, conditional, or math input. o - Print the ASCII value. n - Print the value. i - Returns the user's input."
17:33:38 <esolangs> [[Graphene]] https://esolangs.org/w/index.php?diff=123694&oldid=123540 * Baltdev * (+2)
17:42:01 -!- __monty__ has quit (Quit: leaving).
17:54:06 <esolangs> [[Graphene]] https://esolangs.org/w/index.php?diff=123695&oldid=123694 * Baltdev * (+361) /* Structure */
18:16:08 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
18:16:20 <esolangs> [[Graphene]] https://esolangs.org/w/index.php?diff=123696&oldid=123695 * Baltdev * (+97) /* Layout */
18:16:27 <esolangs> [[Graphene]] https://esolangs.org/w/index.php?diff=123697&oldid=123696 * Baltdev * (+1) /* Layout */
18:24:28 <esolangs> [[Graphene]] https://esolangs.org/w/index.php?diff=123698&oldid=123697 * Baltdev * (+90) /* Layout */
18:44:03 -!- SGautam has quit (Quit: Connection closed for inactivity).
19:21:21 <esolangs> [[Light Pattern]] https://esolangs.org/w/index.php?diff=123699&oldid=41102 * Rottytooth * (+282) Added info box
19:21:37 <esolangs> [[Light Pattern]] M https://esolangs.org/w/index.php?diff=123700&oldid=123699 * Rottytooth * (+1)
19:22:05 <esolangs> [[Light Pattern]] M https://esolangs.org/w/index.php?diff=123701&oldid=123700 * Rottytooth * (+8)
19:26:29 <esolangs> [[Folders]] https://esolangs.org/w/index.php?diff=123702&oldid=123509 * Rottytooth * (+27) added info box, shortened summary
19:38:46 -!- esolangs has joined.
19:38:46 -!- ChanServ has set channel mode: +v esolangs.
19:43:09 <esolangs> [[Time Out]] https://esolangs.org/w/index.php?diff=123703&oldid=74978 * Rottytooth * (+524) Added info box
19:46:05 <esolangs> [[Entropy]] https://esolangs.org/w/index.php?diff=123704&oldid=123418 * Rottytooth * (+291) added info box
20:15:40 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
20:21:01 -!- tromp has joined.
20:22:01 <esolangs> [[Captive]] https://esolangs.org/w/index.php?diff=123705&oldid=123684 * Rottytooth * (+104) /* Hi */
20:26:38 -!- Thelie has joined.
20:27:08 <esolangs> [[Entropy]] M https://esolangs.org/w/index.php?diff=123706&oldid=123704 * Rottytooth * (+39)
21:19:11 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
21:23:48 -!- tromp has joined.
21:49:06 <esolangs> [[Mode Spam]] https://esolangs.org/w/index.php?diff=123707&oldid=123618 * EvyLah * (+55) file extension .modespam
21:49:20 <esolangs> [[Mode Spam]] M https://esolangs.org/w/index.php?diff=123708&oldid=123707 * EvyLah * (+0)
21:53:21 <esolangs> [[N10]] M https://esolangs.org/w/index.php?diff=123709&oldid=123693 * PythonshellDebugwindow * (+78) Lowercase, stub, categories
21:53:28 <esolangs> [[N10]] M https://esolangs.org/w/index.php?diff=123710&oldid=123709 * PythonshellDebugwindow * (+1) o
22:16:28 <esolangs> [[FatFinger]] M https://esolangs.org/w/index.php?diff=123711&oldid=123686 * Rottytooth * (+4) /* FizzBuzz */
22:17:21 -!- Koen_ has joined.
22:17:39 <esolangs> [[FatFinger]] M https://esolangs.org/w/index.php?diff=123712&oldid=123711 * Rottytooth * (+23) /* 99 Bottles */
22:17:49 <esolangs> [[Mode Spam]] https://esolangs.org/w/index.php?diff=123713&oldid=123708 * EvyLah * (+220) added some stuff to mode 5 to not make it look useless
22:18:09 <esolangs> [[Mode Spam]] https://esolangs.org/w/index.php?diff=123714&oldid=123713 * EvyLah * (+2) I forgot to include the number
22:26:15 <esolangs> [[Stasis]] https://esolangs.org/w/index.php?diff=123715&oldid=107735 * Rottytooth * (+445) added info box
22:35:52 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123716&oldid=123680 * Rottytooth * (+27)
22:39:21 <esolangs> [[Sound]] https://esolangs.org/w/index.php?diff=123717&oldid=123651 * Rottytooth * (+25) Added to See Also
22:40:47 <esolangs> [[N10]] https://esolangs.org/w/index.php?diff=123718&oldid=123710 * AnotherUser05 * (+588)
22:41:10 <esolangs> [[N10]] https://esolangs.org/w/index.php?diff=123719&oldid=123718 * AnotherUser05 * (-1)
22:41:28 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123720&oldid=123716 * Rottytooth * (+136)
22:44:04 <esolangs> [[BrainTravel]] https://esolangs.org/w/index.php?diff=123721&oldid=122436 * AnotherUser05 * (+42)
22:44:36 <esolangs> [[Ice box]] https://esolangs.org/w/index.php?diff=123722&oldid=123318 * AnotherUser05 * (+0)
22:45:43 <esolangs> [[JustWords]] https://esolangs.org/w/index.php?diff=123723&oldid=123204 * AnotherUser05 * (+32)
22:47:53 <esolangs> [[Laser Pointer]] https://esolangs.org/w/index.php?diff=123724&oldid=123352 * AnotherUser05 * (+55)
22:49:14 <esolangs> [[Sword]] https://esolangs.org/w/index.php?diff=123725&oldid=123258 * AnotherUser05 * (+45)
22:50:13 <esolangs> [[User:AnotherUser05]] https://esolangs.org/w/index.php?diff=123726&oldid=123388 * AnotherUser05 * (+10)
22:52:18 <esolangs> [[N10]] https://esolangs.org/w/index.php?diff=123727&oldid=123719 * AnotherUser05 * (+95) /* Syntax */
22:54:18 <esolangs> [[JustWords]] https://esolangs.org/w/index.php?diff=123728&oldid=123723 * AnotherUser05 * (+0)
22:54:33 <esolangs> [[Olympus]] https://esolangs.org/w/index.php?diff=123729&oldid=123687 * Rottytooth * (+507) /* Concept */
22:56:17 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123730&oldid=123720 * Rottytooth * (+15)
22:56:52 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123731&oldid=123730 * Rottytooth * (+11)
22:57:28 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123732&oldid=123731 * Rottytooth * (+1)
22:57:52 -!- simcop2387 has quit (Quit: ZNC 1.8.2+deb3.1 - https://znc.in).
22:57:52 -!- perlbot has quit (Quit: ZNC 1.8.2+deb3.1 - https://znc.in).
22:59:46 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123733&oldid=123732 * Rottytooth * (+31)
23:02:58 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123734&oldid=123733 * Rottytooth * (+12)
23:13:17 <esolangs> [[Cree]] N https://esolangs.org/w/index.php?oldid=123735 * Rottytooth * (+2415) Added Cree#, a work in progress by Jon Corbett
23:17:55 -!- Sgeo has joined.
23:18:43 <esolangs> [[Language list]] https://esolangs.org/w/index.php?diff=123736&oldid=123688 * Rottytooth * (+17)
23:18:59 -!- simcop2387 has joined.
23:20:21 -!- perlbot has joined.
23:24:29 <esolangs> [[User:Rottytooth]] https://esolangs.org/w/index.php?diff=123737&oldid=123734 * Rottytooth * (+12)
23:26:23 <esolangs> [[Cree]] https://esolangs.org/w/index.php?diff=123738&oldid=123735 * Rottytooth * (+9)
23:27:52 <esolangs> [[Cree]] https://esolangs.org/w/index.php?diff=123739&oldid=123738 * Rottytooth * (+51)
23:30:24 <esolangs> [[Cree]] https://esolangs.org/w/index.php?diff=123740&oldid=123739 * Rottytooth * (+39)