00:00:02 -!- FortyTwoBB has quit (Client Quit).
00:02:33 <ais523> gah, FortyTwo was here and gone before I noticed
00:02:41 -!- FortyTwoBB has joined.
00:02:49 <ais523> https://esolangs.org/wiki/Flooding_Waterfall_Model
00:03:12 <ais523> you can't meaningfully store data in the quantity of tokens, but you can store it in the amount of damage marked on them
00:03:49 <ais523> there's a summary on the wiki page, and I wrote a compiler into Flooding Waterfall Model (and an interpreter for the language) to make sure it worked
00:03:55 <FortyTwoBB> oh? but the damage is done the very same tick?
00:04:05 <ais523> it depends on when the first token was made
00:04:12 <ais523> because when the oldest token dies, they all die
00:04:36 <ais523> so, if you have say 1000 tokens, the tokens die 1000 ticks after the first token was made
00:04:49 <ais523> and by changing the timing of the first token you can store information, even though the number of tokens is fixed
00:06:05 <FortyTwoBB> ok? but you can only create tokens when others die, so being a few ticks earlier or later is not easy to control right?
00:06:20 <ais523> it does need controlling, but it wasn't too bad to control it
00:06:28 <ais523> there are two basic ideas
00:06:53 <ais523> one is that we split the creature types into two groups, and alternate between the two groups: most of the time, either all the tokens belong to one group or all the tokens belong to the other
00:07:23 <ais523> (so each creature type spends a lot of time with no creatures at all, other than the one used for halting)
00:08:05 <ais523> if we have, say, one Ape makes two Beasts, and one Beast makes three Cats, then the fact that a group empties means that the number of tokens of any given type is always known – it's just a linear equation, so we have a recurrence relation
00:08:56 <ais523> call one change from one group to the other a "cycle"
00:09:00 <FortyTwoBB> yeah a cycle of linked types making each other
00:09:23 <FortyTwoBB> because the apes need to be made from something
00:09:25 <ais523> then, the number of tokens of any given type can be made to be some constant b, to the power of the number of cycles, plus a number that follows a repeating pattern from one cycle to the next
00:10:36 <ais523> the way you do that is to have some "baseline" types which follow a very simple pattern (e.g. you always have one more Ouphe than Homarid), and use those to build up more complicated patterns
00:10:57 <ais523> and for the baseline types, the timing of the tokens doesn't matter, it's set up so that they always finish changing over after the other types do
00:11:24 <ais523> now, for the non-baseline types, they are created either from baseline types or from each other, and we can make those follow a known pattern in quantity
00:11:59 <ais523> because we know how many are created from each other (the quantity is always known) and can use baseline types to adjust the quantity (e.g. if we want to increase the amount by one, we add one more Ouphe creating them and one fewer Homarid)
00:12:12 <ais523> so, our program can make the quantities change in a pattern
00:12:24 <ais523> and the quantity determines the time period between the first token being created and the tokens dying
00:13:00 <ais523> so, we can effectively do arithmetic on these time moments, death time = creation time + X where we choose the value of X
00:13:28 <ais523> but, if two different non-baseline creature types each create the same creature type, then only the first one counts for the creation time
00:13:51 <ais523> which makes it possible to do conditionals, because it in effect gives a minimum operator
00:14:56 <FortyTwoBB> because they would get swept into the cycle
00:15:21 <ais523> the details are in the comments here: http://nethack4.org/esolangs/waterfall-to-flooding-waterfall.pl
00:15:44 <FortyTwoBB> yeah i have that open and am swapping back and forth
00:17:19 <ais523> the one complexity is that there are a few cases where a creature needs to create another of the same group – that produces a sort of "sharp edge" transition that's used to handle control flow in the program being implemented (I implemented regular Waterfall Model, so this is control flow and the zeroing triggers)
00:17:54 <ais523> but, there's never a loop within a group, and each cycle uses exponentially more tokens than the cycle before, so the groups still alternate as intended
00:18:25 <ais523> I'm glad my message got through to the thread, anyway – I don't have a Twitch account and couldn't find a reliable way to send the message
00:18:54 <FortyTwoBB> yeah it's silly, the channels of communication
00:18:56 <ais523> Reddit has become surprisingly broken since it imploded, there were technical issues sending it (and, I gather from the thread, receiving it)
00:20:41 <FortyTwoBB> so basically there's cycles like A->B->C->D->A and X->Y->Z->X with one like Z->C link?
00:21:32 <ais523> no, it's more like all of A,B,C create all of X,Y,Z with like one A->B link
00:21:51 <ais523> the trick is to deal with the "token creation time" and "token quantity" parts of it separately
00:22:07 <ais523> because once the first token has been created, you can add more and it doesn't change the amount of damage marked on the first token
00:22:27 <ais523> as long as you create all the tokens before the first one dies, the timing of the later ones doesn't matter
00:24:07 <ais523> in more detail, the types are in two groups A and B, each of which is divided into baseline and non-baseline types
00:24:17 <FortyTwoBB> well adding more tokens effectively is the same as making fewer tokens later
00:24:26 <ais523> baseline A types create baseline B types and vice versa, the timing doesn't hugely matter and the quantity follows a very simple pattern
00:24:31 <FortyTwoBB> except for the magnitude of the resulting flood
00:24:57 <ais523> the flood magnitudes are controlled to follow a simple pattern
00:25:18 <ais523> the baseline loop basically works to "top up" each token type to the flood magnitude required
00:25:55 <ais523> and then there's a separate loop, where non-baseline A types create non-baseline B types and vice versa, and baseline A also creates non-baseline B and baseline B also creates non-baseline A in order to get the quantities right (but doesn't affect the timing)
00:26:51 <ais523> btw, I used the term "velocity" in the proof to talk about the size of a flood, because it's the distance between the position of the flood of one type and the flood of the types it initially creates
00:27:01 <FortyTwoBB> so like for one clock always being 1 higher than the other, that i can see being done with initial conditions, and having a copied column in the program
00:27:43 <ais523> yep, you get into that position using initial conditions, and then ensure it always remains true throughout the program
00:27:55 <FortyTwoBB> except for a little 2x2 identity matrix at their intersection so the larger one can actually trigger?
00:28:30 <ais523> the actual baseline setup used in the proof is to have one "negative baseline", one "neutral baseline", and n "positive baselines" where n is the number of counters in the emulated program
00:28:47 <ais523> negative baseline has quantity 1 below neutral; one of the positive baseline counters has quantity 1 above neutral, the others are equal
00:29:21 <ais523> and the positives are connected in a sort of twisted loop, so that which positive baseline counter it is that has the higher quantity changes every large cycle (from one group to the other and back)
00:30:40 <ais523> once you have that, you can make the quantities of a counter follow any pattern you like (with repeat length n) via varying how many tokens are created by which of the baselines
00:31:08 <ais523> e.g. say you want the quantities to multiply by 1000 every cycle, and the zeroing triggers from non-baseline counters add up to 5 floods
00:31:32 <ais523> you need 95 from the baselines, and to control the exact quantity, you can choose how many of those 95 are from negative, how many from neutral, how many from each of the positives
00:32:03 <ais523> and the new velocity will end up as 1000 times the old one plus a constant, and you can choose the constant
00:32:27 <ais523> and make it vary in any pattern with a repeat length of n
00:34:32 <FortyTwoBB> This is one of those times where having a vastly more inefficient computation method will actually make the resulting function grow much much faster lol.
00:34:45 <ais523> if you want to try out the compiler, it can be run online at tio.run/#perl (you can copy-and-paste the compiler into the "program" box and write the program to compile in the "input" box)
00:35:05 <ais523> FortyTwoBB: well, it's just one exponential slower than the original Waterfall Model, and uses a lot more counters
00:35:44 <ais523> as such I think the resulting function probably grows more slowly, at least with this construction – you would get a faster-growing function in the original by writing the same program with fewer counters, then using the remaining counters to do an exponentiation
00:36:24 <ais523> it still fits well within creature type limits though
00:36:51 <FortyTwoBB> yeah but we also get to have another card in the deck
00:37:08 <FortyTwoBB> because we dont need dralnu's crusade anymore
00:37:44 <ais523> and I get to have another sideboard card in my competitive Turing-complete deck
00:38:13 <ais523> trying to make the deck Turing-complete still seems to damage competitive chances somewhat, though
00:38:49 <ais523> the decks which can naturally reach states which let you do anything tend to either a) care about their sideboard a lot or b) have no way to access it, forcing you to dilute the maindeck
00:39:03 -!- craigo has quit (Remote host closed the connection).
00:39:25 -!- craigo has joined.
00:42:03 -!- craigo has quit (Remote host closed the connection).
00:43:06 <ais523> the creature type usage seems to be 6 flooding counters per original counter, plus 8 flooding counters, plus 1 halt type, plus some way to get into the desired starting state (which in this construction requires additional creature types – as an alternative you could start with damage marked on some of the creatures, or toughness reductions on them)
00:43:16 -!- craigo has joined.
00:43:21 <ais523> but there's enough room to fit, say, a Spiral Rise interpreter
00:44:21 <shachaf> I'm trying to remember what architectures have interesting memory ordering quirks.
00:44:38 <ais523> oh, in addition to the link to the wiki, have a link to this conversation: https://logs.esolangs.org/libera-esolangs/2023-10-14.html#ld (would be a good thing to post in the MTG Salvation thread)
00:44:50 <shachaf> The two examples I always think of are: POWER doesn't necessarily have multicopy atomicity; Alpha has the split-cache issue with data dependencies.
00:45:31 <shachaf> But I think I'm forgetting other wacky behaviors (I vaguely remember there was some SPARC-specific thing?).
00:46:31 <ais523> shachaf: I find architectures with more guarantees more difficult because it's hard to remember exactly what is and isn't guaranteed
00:47:23 <ais523> like, why does x86 have the SFENCE instruction? normally the memory ordering guarantees make that a no-op, but its existence implies that there are cases where it does something
00:47:24 <FortyTwoBB> yeah im posting it, and I did try the compiler and it made a reasonable-looking output
00:47:58 <shachaf> I think SFENCE is relevant for non-temporal store visibility.
00:48:17 <ais523> shachaf: but only if you want to overwrite it with another store? it doesn't give store/load ordering
00:48:22 <ais523> you need MFENCE for that
00:48:44 <shachaf> I don't think it's particularly about overwriting it.
00:49:02 <shachaf> It certainly doesn't give store-load ordering or flush the store buffer or anything like that.
00:49:22 <ais523> FortyTwoBB: by the way, thanks to all of you in the MTG Salvation thread for working on this – the Turing-completeness construction is so much neater and simpler than when I started working on this
00:49:51 <FortyTwoBB> so that makes me hopeful for things to work out. I'll need to check some more but you haven't made a mistake so far and this looks good.
00:49:59 <shachaf> But in the classic example where you do store_nt(a, ...); store_nt(b, ...); store(is_ready_flag, true); , I think you need SFENCE to guarantee that if the flag store is visible, so are the a and b stores.
00:50:14 <shachaf> (Whereas for regular stores on x86 that behavior is guaranteed, of course.)
00:50:26 <ais523> shachaf: ah right, to pair with an LFENCE on some other processor
00:50:56 <FortyTwoBB> no thank you! i bashed my head against this and thought it was game over for it when the damage doubling version was not TC.
00:51:05 <shachaf> I guess you only need the LFENCE if you have a non-temporal load, too?
00:51:23 <ais523> regular reads can be reordered by the processor
00:51:38 <ais523> if you read memory location X, then read memory location Y, then Y can be given an older value than the value you just read from X
00:51:40 <shachaf> On x86 all regular loads behave like load-acquire.
00:51:53 <shachaf> And all regular stores behave like store-release.
00:51:58 <ais523> regular stores are release, but regular loads are relaxed
00:52:08 <ais523> so you need to use lfence a lot in multithreaded code
00:52:38 <ais523> FortyTwoBB: I was so worried about making mistakes
00:52:50 <shachaf> I don't think that's true.
00:52:51 <ais523> I wasn't confident this was right until I had the Flooding Waterfall Model interpreter actually running programs
00:53:15 <shachaf> E.g. https://godbolt.org/z/brfjbEKEx
00:54:23 -!- Lord_of_Life_ has joined.
00:54:37 <FortyTwoBB> Yeah if this works it solves our layers problem and shaves a cardslot
00:54:46 -!- Lord_of_Life has quit (Ping timeout: 255 seconds).
00:55:29 <shachaf> Ah, https://stackoverflow.com/a/50780314 says that even the use I mentioned isn't necessary:
00:55:32 <shachaf> "_mm_lfence is almost never useful as an actual load fence. Loads can only be weakly ordered when loading from WC (Write-Combining) memory regions, like video ram. Even movntdqa (_mm_stream_load_si128) is still strongly ordered on normal (WB = write-back) memory, and doesn't do anything to reduce cache pollution."
00:55:42 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
00:56:27 <FortyTwoBB> Well I'll need to take some time to go over this, but it looks very very good.
00:57:07 <ais523> shachaf: I was testing on godbolt too, you seem to be right
00:57:32 <ais523> …this means that I have been using a lot more fences than necessary in my x86 code!
01:00:42 <ais523> now I'm wondering how out-of-order execution even works, if it isn't allowed to reorder the loads – presumably it tries to keep all the values it's working with in exclusive cache so that it knows they haven't been written by other processors, and just remembers whether it's written them itself
01:01:03 -!- FortyTwoBB has quit (Quit: Client closed).
01:07:19 <ais523> hmm, I am dreading the technical issues that will happen the *next* time I try to get in contact with the MTG Busy Beaver team, this time was bad enough…
01:08:41 <shachaf> ais523: My vague understanding is that it's pretty speculative.
01:09:23 <shachaf> So it can maybe reorder some loads speculatively and verify later that it turns out to be OK.
01:09:28 <shachaf> But I don't really know the details at all.
01:09:32 <ais523> oh right, you do a speculative load out of order, then check whether it was correct when you retire
01:09:49 <ais523> and end up with a whole load of timing-based sidechannels that end up leaking kernel internals and causing huge security issues
01:10:37 <ais523> I probably knew that at some point
01:10:58 <shachaf> How does this work, though? Say you have "load A; load B;", and B is in your cache but A isn't. You speculatively do the B load while waiting for A.
01:11:37 <shachaf> How can you tell when you get A whether you need to reload B?
01:12:03 <ais523> don't you just throw away all your speculative effort when retiring the "reload B" instruction?
01:12:17 <esolangs> [[^!]] https://esolangs.org/w/index.php?diff=117886&oldid=117824 * Ninesquared81 * (+432) /* Examples */
01:12:27 <ais523> like, the sequence is dispatch load A → dispatch load B → calculation based on B → calculation based on A → retire load A → retire load B
01:13:17 <ais523> by the time you retire the load of B, a) you know whether B has changed since you dispatched the load, b) nothing that was based on B has actually affected memory yet
01:13:35 <ais523> so one simple solution would be to just flush the pipeline, although I suspect actual processors have some faster way to recover
01:13:56 <ais523> * when retiring the "load B" instruction
01:14:06 <shachaf> Hmm, I'm probably confused.
01:14:23 <shachaf> My concern is that another core did "store A; store B", and e.g. was preempted between the two stores.
01:14:35 <shachaf> It does "store B; store A", oops.
01:14:42 <shachaf> Sorry, I was thinking complete nonsense.
01:15:17 <ais523> yes, the general pattern is store B; sfence; store A on one processor, and load A; lfence; load B on the other (delete fences if you have a processor that doesn't need them)
01:15:56 <shachaf> OK, when you ignore my nonsense this makes sense.
01:16:04 <shachaf> You just need to retire in order.
01:16:27 <ais523> which x86(-64) does, of course
01:16:47 <ais523> although, this conversation has made me realise why retiring load instructions is important
01:20:25 <shachaf> There's nothing like this for stores, right?
01:20:32 <shachaf> You just need to flush the store buffer in FIFO order.
01:22:08 <ais523> stores are confusing because they happen at the retire, so the execution units don't have to do anything special at all – all the complexity is in how the cache works
01:23:48 <shachaf> At retire they just go in the store buffer, not the cache, I assume, right?
01:24:11 <ais523> some quick searching implies that this is the primary reason why mfence is not a no-op on x86
01:24:22 <shachaf> Yes, that's my understanding.
01:24:22 <ais523> because a load from cache can cross a store that's still in the store buffer
01:25:00 <shachaf> Still in another core's store buffer, specifically.
01:25:13 <ais523> yes, processors know which addresses they've stored to
01:25:25 <shachaf> The classic store-load ordering thing -- you do "store-load" and you want the load to only happen after the store is globally visible.
01:25:43 <shachaf> So you need cross-core communication to happen between the store and the load.
01:25:59 <shachaf> Whereas for store-store, load-load, and load-store, you don't need that.
01:29:36 <shachaf> Ah, https://stackoverflow.com/a/62480523 describes this.
01:29:45 <shachaf> Uh, describes the memory order mis-speculation thing.
01:29:54 <shachaf> And it says there's a performance counter for it?
01:31:20 <ais523> there are a huge number of performance counters, not all of which have obvious meanings
01:31:32 <ais523> this is the sort of thing that I'd definitely expect to have a performance counter
01:31:58 <shachaf> But it doesn't say what it's called and I'm trying to find it.
01:32:21 <ais523> is it machine_clears.memory_ordering?
01:32:25 <ais523> or is that something else?
01:32:42 <shachaf> Oh, it's probably https://perfmon-events.intel.com/index.html?pltfrm=icelake.html&evnt=MACHINE_CLEARS.MEMORY_ORDERING
02:14:17 -!- chiselfuse has quit (Ping timeout: 252 seconds).
02:16:14 -!- chiselfuse has joined.
02:16:21 -!- chiselfuse has quit (Remote host closed the connection).
02:21:29 -!- chiselfuse has joined.
03:40:59 <esolangs> [[^!]] https://esolangs.org/w/index.php?diff=117887&oldid=117886 * Ninesquared81 * (+564) /* Examples */
04:01:31 -!- Noisytoot has quit (Ping timeout: 264 seconds).
04:04:21 -!- ais523 has quit (Quit: quit).
04:06:20 -!- Noisytoot has joined.
05:07:06 <esolangs> [[Trampolines]] https://esolangs.org/w/index.php?diff=117888&oldid=117261 * Aadenboy * (+406) fixed velocities for trampolines, and added pipes again
05:22:40 <esolangs> [[Trampolines]] M https://esolangs.org/w/index.php?diff=117889&oldid=117888 * Aadenboy * (+1) stack based not cell based
05:37:38 -!- awewsomegamer has joined.
05:39:13 -!- awewsomegamer has quit (Client Quit).
07:06:18 <esolangs> [[Capsule]] https://esolangs.org/w/index.php?diff=117890&oldid=109094 * Leol22 * (+76)
07:08:50 <esolangs> [[Language list]] M https://esolangs.org/w/index.php?diff=117891&oldid=117757 * Leol22 * (+14)
07:32:38 <esolangs> [[Special:Log/upload]] upload * Aadenboy * uploaded "[[File:Text BABA 0.webp]]": BABA text from Baba is You
08:20:56 <esolangs> [[Baba Is You]] https://esolangs.org/w/index.php?diff=117893&oldid=88244 * Aadenboy * (-6706) completely overhauled the page :P
08:49:58 -!- Europe2048 has joined.
09:07:32 -!- craigo has quit (Quit: Leaving).
09:32:18 -!- Koen has joined.
10:23:01 -!- Sgeo has quit (Read error: Connection reset by peer).
10:26:35 <esolangs> [[Talk:Nice]] https://esolangs.org/w/index.php?diff=117894&oldid=117879 * None1 * (+182) Content on this wiki must be public domain or equivalent
10:28:55 <esolangs> [[Talk:Nice]] https://esolangs.org/w/index.php?diff=117895&oldid=117894 * None1 * (+181)
10:29:23 <esolangs> [[Talk:Nice]] M https://esolangs.org/w/index.php?diff=117896&oldid=117895 * None1 * (+11)
10:29:57 <esolangs> [[User:None1]] M https://esolangs.org/w/index.php?diff=117897&oldid=117861 * None1 * (-4) /* My Articles */
10:30:15 <esolangs> [[User:None1]] M https://esolangs.org/w/index.php?diff=117898&oldid=117897 * None1 * (-15) /* My Articles */
10:32:30 <esolangs> [[User:None1/InDev]] https://esolangs.org/w/index.php?diff=117899&oldid=117853 * None1 * (+298) /* Arithmetic */
10:33:18 <esolangs> [[User:None1/InDev]] M https://esolangs.org/w/index.php?diff=117900&oldid=117899 * None1 * (+6) /* Arithmetic */ periods
10:33:43 <esolangs> [[User:None1/InDev]] M https://esolangs.org/w/index.php?diff=117901&oldid=117900 * None1 * (+10) /* Declaration */
10:54:29 -!- arseniiv has joined.
10:57:05 <esolangs> [[User:None1/InDev]] https://esolangs.org/w/index.php?diff=117902&oldid=117901 * None1 * (+60) /* I/O */
10:59:21 <esolangs> [[User:None1/InDev]] https://esolangs.org/w/index.php?diff=117903&oldid=117902 * None1 * (+229) /* Commands */
11:00:40 <esolangs> [[User:None1/InDev]] M https://esolangs.org/w/index.php?diff=117904&oldid=117903 * None1 * (+130) /* Commands */
11:47:29 <b_jonas> and, I assume, even with all the overhead for the flooded encoding, a Turing-universal machine will fit into the 280 or so creature types?
11:49:15 <b_jonas> "but there's enough room to fit, say, a Spiral Rise interpreter" => ah
11:56:47 <b_jonas> "<ais523> how out-of-order execution even works, if it isn't allowed to reorder the loads – presumably it tries to keep all the values it's working with in exclusive cache" => that's what I would think, since accessing main memory is so slow that at that point out of order execution doesn't help much, except… doesn't hyperthreading share the L1D cache between two threads of execution that alternate
11:56:53 <b_jonas> instructions in a single core? and don't all the processor cores in a processor share their L3 cache usually?
11:58:08 <b_jonas> I never really tried to understand x86's multi-threaded memory model, I figure I don't need to write code that communicates between threads so much that the optimization matters for me.
12:08:58 <b_jonas> I wonder how efficiently that M:tG construction can simulate computation
12:10:03 <int-e> . o O ( Will it ever be tournament viable :P )
12:11:04 <int-e> . o O ( "Don't worry, we're just computing the 5th Fibonacci number; it'll be done in 5 minutes." )
12:11:38 <b_jonas> int-e: I meant the other efficient, as in how slow it is asymptotically to simulate arbitrary programs
12:12:08 <b_jonas> probably just like two to eight levels of exponential
12:21:19 <Europe2048> Tried making a bf interpreter in Scratch, but I couldn't get it to work. Not even "+." worked as expected.
12:23:10 <Europe2048> I can't even share it with text due to how many blocks it has.
12:33:01 <int-e> b_jonas: that's less funny :-P
12:33:15 -!- Koen has quit (Remote host closed the connection).
12:36:14 -!- Koen has joined.
12:38:40 <Europe2048> Since you weren't answering, I'll give up by myself...
12:45:28 -!- __monty__ has joined.
12:46:42 -!- Koen has quit (Quit: Leaving...).
13:23:24 -!- ais523 has joined.
13:25:03 <ais523> <b_jonas> I wonder how efficiently that M:tG construction can simulate computation ← it's exponentially slower than The Waterfall Model, which if using the Spiral Rise implementation to fit into the required amount of memory, is exponentially slower than a tag system
13:25:08 <ais523> and tag systems are polynomially slower than Turing machines
13:25:34 <ais523> so, actually not that bad as these things go
13:26:23 <ais523> (also, there may be a faster programming technique using the same sort of setups with the same cards)
13:32:21 <ais523> one thing I am interested in is esolang interpreters that can optimise out all the levels of exponential explosion in this sort of simulation
13:32:45 <ais523> optimising out one level is normally fairly easy – both ratiofall and floodfall can do it (via different mechanisms)
13:32:51 <ais523> but it would be nice to get the optimisation to recurse
13:38:13 <b_jonas> yes, and for M:tG that doesn't even sound too impossible, by simulating stacks
13:38:34 <b_jonas> it's just hard if you want a somewhat competitive deck
13:42:31 <ais523> right, I am mostly interested in golfing the implementation so that the rest of the deck can be as competitive as possible
13:43:27 <ais523> a deck like Ruby Storm can make hundreds of mana and play every card in its sideboard – but because it can do that, its sideboard is normally full of useful cards that help it win in various different situations that might come up during a game, rather than actual sideboard cards
13:43:37 <b_jonas> what's the newest card that seems to help (as in, you can't just replace it with older cards) these days?
13:43:42 <ais523> so, using up sideboard slots on Turing-completeness cards reduces the win rate even in game 1
13:44:09 <b_jonas> is the hundreds of mana colorless?
13:44:39 <ais523> it's red, you get some amount of other colors too
13:45:19 <ais523> e.g. Ruby Storm commonly runs Inspired Tinkering in the sideboard, which generates treasure tokens as one of its effects
13:45:23 <ais523> `card-by-name Inspired Tinkering
13:45:58 <ais523> but this means that if you want lots of non-red mana you can't cut the Inspired Tinkering from the sideboard, even though it's one of the less important sideboard cards
13:46:52 <ais523> (the deck also runs Manamorphose, but that can't generate too much colored mana without decking you out, unless you have some way to reshuffle cards back into your library or some way to protect yourself against decking)
13:48:47 <b_jonas> the color is probably more required if you want to set up your computer than if you just want to win many games
13:49:37 <ais523> right, once the combo fully goes off you can win with just about any damage spell – the deck has copy effects, so you can win with just one Lightning Bolt if you want to
13:49:39 <b_jonas> or does it not matter because you go off infinitely powerful before you need the color?
13:50:01 <ais523> so most of the sideboard is there for if the deck only partially goes off and needs to try to salvage a fizzled combo
13:50:03 <b_jonas> well, before you need all colors
13:50:22 <ais523> a fizzled combo generally wouldn't have non-red mana available, so the salvaging cards are red
13:50:32 <ais523> things like Empty the Warrens and Galvanic Relay
13:50:51 <b_jonas> and your opponent will probably have a lot of the most annoying disruption
13:50:59 <FireFly> does this construction only require one player for computation, or does it depend on cooperation?
13:51:00 <b_jonas> some of the opponents at least
13:51:14 <ais523> FireFly: it requires an opponent but they don't have to be cooperative
13:51:16 <b_jonas> FireFly: the goal for this construction is to not require cooperation by the opponent
13:51:30 <b_jonas> that's why it's so hard to make it competitive
13:51:51 <ais523> b_jonas: watching Ruby Storm play through disruption is glorious, it basically keeps on trying combos until the opponent runs out of counterspells, and brute-forces its way through prison pieces
13:52:11 <b_jonas> in cooperative you can usually just spend 50 turns to draw most of your deck and then set up a fragile combo
13:52:22 <ais523> I think it can even win through Trinisphere if it's given enough time to sculpt its hand
13:52:49 <ais523> the main cards that stop it are Deafening Silence and Maddening Hex, which can't be burst through simply by making more mana
13:54:34 <ais523> it is possible that some other Storm variant, that is less sideboard-dependent, would be more competitive after you replace 6-7 sideboard cards with a Turing-complete construction though
13:54:35 <b_jonas> I'm still curious, what's the newest card that's useful for the computation setup
13:55:00 <ais523> sorry, I tried to work it out and then got distracted, for most of the cards old cards are acceptable
13:55:07 <HackEso> Arcbond \ 2R \ Instant \ Choose target creature. Whenever that creature is dealt damage this turn, it deals that much damage to each other creature and each player. \ FRF-R
13:55:26 <ais523> there are other ways to repeatedly damage every creature in an unstoppable loop, though
13:55:55 <ais523> but double Arcbond is very efficient – the problem with it is that you need a lifegain source in order to avoid the game ending due to burn damage
13:56:47 <ais523> (also it is hard to ensure that the triggers always stack in the correct order – the current construction uses an Arcbond that was controlled by the turn player as it resolved, with everything else controlled by their opponent)
13:58:00 <b_jonas> everything else controlled by the opponent? does that mean you need Donate?
13:58:16 <b_jonas> or is there some other card that helps with that
13:58:26 <ais523> `card-by-name Fractured Identity
13:58:27 <HackEso> Fractured Identity \ 3WU \ Sorcery \ Exile target nonland permanent. Each player other than its controller creates a token that's a copy of it. \ C17-R
13:58:45 <ais523> so you need to be able to a) make a huge number of tokens and b) give them to your opponent
13:59:14 <ais523> Fractured Identity plays double duty, but it implies you need to add an un-exiler to the deck
13:59:56 <b_jonas> that card is newer than Arcbond
14:00:09 <ais523> it's not part of the computation, it's part of the setup
14:00:15 <ais523> I didn't realise you were counting those too
14:00:22 <b_jonas> I'm not sure what I'm counting
14:00:27 <ais523> but yes, you need an unexiler in order to create multiple tokens
14:00:44 <b_jonas> I know that there's power creep so new cards are likely useful if you want to just not lose
14:00:48 <ais523> `card-by-name Mirror of Fate
14:00:49 <HackEso> Mirror of Fate \ 5 \ Artifact \ {T}, Sacrifice Mirror of Fate: Choose up to seven face-up exiled cards you own. Exile all the cards from your library, then put the chosen cards on top of your library. \ M10-R
14:00:52 <ais523> `card-by-name Riftsweeper
14:00:53 <HackEso> Riftsweeper \ 1G \ Creature -- Elf Shaman \ 2/2 \ When Riftsweeper enters the battlefield, choose target face-up exiled card. Its owner shuffles it into their library. \ FUT-U, MMA-U
14:00:59 <ais523> `card-by-name Coax from the Blind Eternities
14:01:00 <HackEso> Coax from the Blind Eternities \ 2U \ Sorcery \ You may choose an Eldrazi card you own from outside the game or in exile, reveal that card, and put it into your hand. \ EMN-R
14:01:33 <ais523> I think those are the most viable options for unexilers
14:01:46 <ais523> there aren't a whole lot of unexilers printed
14:02:20 <b_jonas> `card-by-name Pull from Eternity
14:02:21 <HackEso> Pull from Eternity \ W \ Instant \ Put target face-up exiled card into its owner's graveyard. \ TSP-U
14:02:31 <b_jonas> I think Pull from Eternity was the first one
14:02:36 <ais523> alternatively, you can run a create-a-token effect and a donate effect separately, but the unexilers are useful for creating infinite (rather than large finite) loops and setting up an arbitrarily large program
14:02:55 <ais523> Pull from Eternity doesn't work because you then have to get the card out of the graveyard
14:03:22 <b_jonas> don't you need something to replicate instants anyway, for other cards?
14:03:41 <ais523> Ruby Storm is based around the card Bonus Round which copies instants
14:03:59 <ais523> that's one of the reasons I wanted to use it as a base – it runs four copies of an instant-copier maindeck
14:04:27 <ais523> so that saves space in finding a way to copy your instants and sorceries
14:04:41 <ais523> `card-by-name Bonus Round
14:04:47 <ais523> too new, I thought it would be
14:04:50 <b_jonas> that does mandatory copy, so it just tries to go infinite if you have two of them?
14:05:07 <ais523> it goes exponential rather than infinite
14:05:26 <b_jonas> exponential? isn't it just linear?
14:05:36 <ais523> it's linear in the number of copies of Bonus Round which resolved
14:05:47 <ais523> which is exponential in the number that were cast, because they copy each other
14:10:31 <ais523> building a Turing-complete programming language inside a card game
14:11:09 <ais523> actually there are two games that can manage it, Magic: the Gathering and Netrunner
14:11:31 <ais523> but Magic finds it much easier, we're wondering if it's possible to do it with an opponent trying to stop you (implying that you play a competitive deck)
15:59:27 -!- Guest55 has joined.
16:00:41 -!- Guest55 has quit (Client Quit).
16:27:25 -!- Thelie has joined.
16:35:26 <esolangs> [[Capsule]] M https://esolangs.org/w/index.php?diff=117905&oldid=117890 * PythonshellDebugwindow * (+59) Fix link to userpage, add categories
17:05:05 <esolangs> [[N]] M https://esolangs.org/w/index.php?diff=117906&oldid=44078 * PythonshellDebugwindow * (+13) Deadlink
17:08:36 <esolangs> [[Metat]] M https://esolangs.org/w/index.php?diff=117907&oldid=36168 * PythonshellDebugwindow * (+52) Categories
17:28:55 -!- Thelie has quit (Remote host closed the connection).
18:45:27 -!- __monty__ has quit (Ping timeout: 240 seconds).
18:50:47 -!- __monty__ has joined.
19:11:31 -!- arseniiv has quit (Quit: gone too far).
19:28:58 -!- __monty__ has quit (Ping timeout: 255 seconds).
19:30:48 -!- __monty__ has joined.
19:35:16 -!- __monty__ has quit (Ping timeout: 255 seconds).
19:41:43 -!- Sgeo has joined.
20:37:32 <esolangs> [[Sultan's daughter]] M https://esolangs.org/w/index.php?diff=117908&oldid=91441 * CreeperBomb * (-4) Corrected punctuation, added necessary conjunction
20:39:39 <esolangs> [[Oifi]] M https://esolangs.org/w/index.php?diff=117909&oldid=115346 * CreeperBomb * (-1) /* Conclusion */
20:47:28 <esolangs> [[Neg]] M https://esolangs.org/w/index.php?diff=117910&oldid=70733 * PythonshellDebugwindow * (+53) Categories
20:55:53 <esolangs> [[NUMBRS++]] M https://esolangs.org/w/index.php?diff=117911&oldid=94370 * PythonshellDebugwindow * (+25) Category
21:21:12 -!- Europe2048 has quit (Quit: Client closed).
22:14:49 -!- craigo has joined.
22:17:28 -!- Koen has joined.
22:27:34 -!- craigo_ has joined.
22:32:21 -!- craigo has quit (Ping timeout: 260 seconds).
22:34:18 <esolangs> [[Three Star Programmer]] https://esolangs.org/w/index.php?diff=117912&oldid=74391 * Tux1 * (+105)
23:28:05 -!- Koen has quit (Quit: Leaving...).
23:48:39 -!- Sgeo_ has joined.
23:48:42 -!- Sgeo has quit (Read error: Connection reset by peer).