2019-09-24
00:29:30 -!- MDude has joined.
00:42:11 -!- budonyc has quit (Quit: Leaving).
00:55:47 -!- Lykaina has joined.
02:07:21 <imode> https://pressron.wordpress.com/2016/08/30/what-we-talk-about-when-we-talk-about-computation/
02:08:30 <imode> this post is interesting, because it presents a dichotomy that I've been struggling with.
02:08:40 <imode> that of models of programming vs. models of computation.
02:14:07 -!- lf94 has joined.
02:14:13 <imode> Modal the language is a model of programming, while the underlying machine model is a model of computation.
02:15:10 <imode> the split between the two is pretty clear; a model of programming requires a validator, compiler or other machinery to function properly prior to performing computations, while a model of computation doesn't.
02:16:20 <imode> I don't like models of programming because there's always something to be simplified. some underlying detail that can be exposed, judged and eliminated if found redundant.
02:16:33 <imode> you can _build_ models of programming on top of models of computation.
02:17:20 <imode> so it's a matter of finding just the right match.
02:35:31 -!- Lykaina has quit (Quit: leaving).
03:44:41 -!- imode has quit (Ping timeout: 246 seconds).
03:57:13 -!- sprocklem has joined.
04:06:25 <esowiki> [[1+]] https://esolangs.org/w/index.php?diff=66328&oldid=66320 * TwilightSparkle * (+87)
04:43:17 -!- Lord_of_Life has quit (Ping timeout: 265 seconds).
04:47:02 -!- Lord_of_Life has joined.
04:53:46 -!- rodgort has quit (Quit: Leaving).
05:01:19 -!- rodgort has joined.
05:08:42 -!- atslash has joined.
05:17:01 -!- atslash has quit (Quit: Leaving).
05:19:28 -!- MrBismuth has quit (Ping timeout: 245 seconds).
05:21:06 -!- atslash has joined.
05:25:45 -!- MrBismuth has joined.
05:29:43 <esowiki> [[1+]] https://esolangs.org/w/index.php?diff=66329&oldid=66328 * TwilightSparkle * (+147)
05:43:40 <esowiki> [[1+]] M https://esolangs.org/w/index.php?diff=66330&oldid=66329 * TwilightSparkle * (+102) /* Turing-Completeness */
05:49:44 -!- JonoCode9374 has joined.
05:51:34 -!- JonoCode9374 has quit (Remote host closed the connection).
05:56:07 -!- oerjan has quit (Quit: Nite).
08:03:24 -!- cpressey has joined.
08:08:57 <cpressey> Good morning. Interesting conversation here yesterday, but largely a tangent and I never returned to the thing that started it, which was:
08:09:05 <cpressey> <shachaf> It's interesting that it's a version of Rice's theorem that holds for programs that are guaranteed to halt.
08:09:49 <cpressey> Rice's theorem states that all nontrivial properties of the RE sets are undecidable. There are gobs and gobs of nontrivial properties of recursive functions that are decidable.
08:12:59 <cpressey> Rice's theorem doesn't say anything about sets other than RE sets, and neither, as far as I can tell, does "Rice's theorem for the computable reals".
08:13:37 <cpressey> Nor would I expect "Rice's theorem for the computable reals" to say anything about sets other than the RE sets, because the computable reals are an RE set.
08:43:47 -!- j-bot has quit (Ping timeout: 245 seconds).
08:43:51 -!- HackEso has quit (Remote host closed the connection).
08:43:58 -!- HackEso has joined.
11:08:43 <shachaf> cpressey: No, I certainly wasn't suggesting that it's a direct consequence of Rice's theorem.
11:10:05 <shachaf> Rice's theorem is a connectedness theorem, and so is "Rice's theorem for the computable reals".
11:14:45 <shachaf> Actually I'm not sure what you're saying here.
11:15:23 <cpressey> I'm saying "Rice's theorem for the computable reals" is not " a version of Rice's theorem that holds for programs that are guaranteed to halt."
11:16:02 <cpressey> Rice's theorem is a statement about the RE sets. So is "Rice's theorem for the computable reals".
11:17:35 <cpressey> They don't say anything useful when you apply them to a more specific set, like R.
11:18:52 <shachaf> By "a version of", I mean "it makes the same statement about a different thing".
11:19:19 <cpressey> The computable reals are an RE set.
11:19:32 <cpressey> It's not an essentially different thing.
11:19:55 <shachaf> Wait, what do you mean? Of course it is.
11:20:01 <cpressey> Explain how.
11:20:09 <cpressey> No, wait.
11:20:12 <cpressey> This is the tangent.
11:20:26 <cpressey> This has nothing to do with machines that always halt.
11:20:43 <cpressey> You don't have a version of Rice's theorem that says something about machines that always halt.
11:21:03 <shachaf> A version of Rice's theorem that holds for [a special set of] programs that are guaranteed to halt.
11:21:16 <shachaf> Or equivalence classes thereof.
11:21:32 <cpressey> In brief, what does this theorem say?
11:21:40 <shachaf> Which one?
11:21:48 <cpressey> "A version of Rice's theorem that holds for [a special set of] programs that are guaranteed to halt."
11:22:07 <shachaf> I think I was just talking about Rice's theorem for the computable reals at that time.
11:22:18 <cpressey> What does this theorem you refer to, say?
11:22:23 <shachaf> A computable real is a program : Q+ -> Q
11:22:53 <shachaf> Such that two computable reals that converge to the same value are considered equivalent.
11:23:25 <shachaf> "Rice's theorem for the computable reals" says that, given a computable real, no nontrivial property of the real number it represents is decidable.
11:24:00 <cpressey> "The real number it represents" is not a guaranteed-to-halt function.
11:24:20 <shachaf> What? Functions don't halt.
11:24:23 <Taneb> shachaf: that statement doesn't feel quite right
11:24:32 <shachaf> And real numbers aren't functions.
11:24:43 <cpressey> "The real number it represents" is not a guaranteed-to-halt program. <--- better?
11:24:51 <shachaf> No, real numbers aren't programs either.
11:25:09 <cpressey> You literally just said they were.
11:25:13 <cpressey> shachaf | A computable real is a program : Q+ -> Q
11:25:35 <shachaf> OK, that's probably the thing Taneb was objecting to.
11:25:42 <cpressey> I'm on the cusp of concluding you're arguing in bad faith.
11:25:55 <shachaf> Yikes. Let me try to be more precise.
11:26:12 <tromp_> in what sense is a program from Q+ to Q a real?
11:26:40 <Taneb> Given a series of approximations, find a closer approximation?
11:26:45 <shachaf> A computable real is a special kind of real number.
11:27:23 <shachaf> A "program representing a computable real" -- which I should have a shorter name for -- is a program that can approximate it to any precision you ask for.
11:28:10 <tromp_> then what is the Q+ ?
11:28:18 <shachaf> Positive rationals.
11:28:29 <cpressey> We can know a lot more about that program than we can know about the computable real that it computes finite approximations of. You seem to be conflating these two objects.
11:29:11 <shachaf> Yes, the theorem is definitely about properties of the real represented by the program, not the program itself.
11:29:19 <shachaf> This is also the case with the regular Rice's theorem.
11:30:39 <shachaf> If r is a computable real, it has a bunch of programs representing it. Say f is one. For any positive rational error bound e, |f(e) - r| < e
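A minimal sketch of such a representative in Python, for the computable real sqrt(2). The name sqrt2 and the bisection approach are illustrative assumptions, not anything from the discussion; the point is only that the program halts on every positive rational error bound eps and returns a rational within eps of the real, i.e. the |f(e) - r| < e condition above.

    from fractions import Fraction

    def sqrt2(eps: Fraction) -> Fraction:
        """A representative of the computable real sqrt(2): for any positive
        rational error bound eps, halt and return a rational q with
        |q - sqrt(2)| < eps.  Plain interval bisection."""
        lo, hi = Fraction(1), Fraction(2)   # invariant: lo**2 <= 2 <= hi**2
        while hi - lo >= eps:
            mid = (lo + hi) / 2
            if mid * mid <= 2:
                lo = mid
            else:
                hi = mid
        return lo                           # sqrt(2) lies in [lo, hi], so |lo - sqrt(2)| < eps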
11:30:58 <Taneb> "If P is a function from the computable reals to booleans, then P is either trivial or undecidable", is this a correct statement of the theorem?
11:31:35 <shachaf> Sure.
11:31:43 <shachaf> (Where I guess "undecidable" means "uncomputable".)
11:32:06 <shachaf> Hmm, I should be more careful since I'm trying to be precise.
11:32:10 <cpressey> shachaf: For "This is also the case with the regular Rice's theorem", could i ask you to actually restate "the theorem is definitely about properties of the real represented by the program, not the program itself" in terms of regular Rice's theorem.
11:32:42 -!- j-bot has joined.
11:33:07 <shachaf> Taneb: A function takes an argument which is a computer program. It can decide plenty of those things about those programs. If we insist that it returns the same value for the same computable real, whatever program we use to represent it, then it's uncomputable.
11:33:26 <tromp_> i wld agree with that
11:33:34 <Taneb> Yes, I agree with that too
11:33:36 <tromp_> we cannot decide if a computable real is positive
11:34:18 <Taneb> We cannot decide programmatically in general, certainly
11:34:47 <Taneb> We can fairly easily decide whether 4, which is a computable real, is positive or not
11:36:40 -!- xkapastel has joined.
11:36:43 <shachaf> cpressey: The regular Rice's theorem says the same thing.
11:36:57 <shachaf> cpressey: https://en.wikipedia.org/wiki/Rice%27s_theorem calls it "semantic properties"
11:38:16 <cpressey> shachaf: So what does any of this have to do with always-halting programs?
11:38:41 <shachaf> Only that representatives of computable reals always halt.
11:39:08 <shachaf> I think I phrased it in that particular way as a leadup to something else which maybe I didn't say.
11:39:10 <cpressey> What does Rice's theorem tell you about these representatives?
11:39:20 <Taneb> Absolutely nothing
11:39:27 <cpressey> Taneb: I agree.
11:39:53 <cpressey> Which is why I find "It's interesting that it's a version of Rice's theorem that holds for programs that are guaranteed to halt." so misleading.
11:41:24 <Taneb> The programs representing computable reals must halt by any reasonable definition
11:41:36 <Taneb> (or be productive, but that doesn't really work with TMs)
11:41:38 <shachaf> "it's a version of" doesn't mean "it's a consequence of"
11:42:08 <cpressey> "It's a version of" means it can be stated as a seperate theorem. Can you state it?
11:42:38 <cpressey> I'd love to hear this version of Rice's theorem that says interesting things about programs that are guaranteed to halt.
11:42:43 <Taneb> Rice's theorem for computable reals?
11:42:50 <Taneb> It's been stated multiple times in the past half hour
11:43:06 <cpressey> Rice's theorem for the representatives of computable reals.
11:43:20 <Taneb> No such thing, just like there's no Rice's theorem for Turing machines
11:43:39 <cpressey> Taneb: I suppose it's too much to hope to hear shachaf agree that there's no such thing?
11:44:20 <shachaf> I feel like Taneb is mostly in complete agreement with me here.
11:44:44 <shachaf> At least I feel like I'm in complete agreement with Taneb.
11:44:48 <Taneb> shachaf: certainly in the broad strokes
11:45:03 <shachaf> Taneb: Every last detail, no exceptions!
11:45:31 <cpressey> Okay, well. Is the nature of my objection still unclear?
11:45:43 <shachaf> cpressey: If we call the regular one "Rice's theorem for partial functions", then we can call this one "Rice's theorem for the computable reals".
11:46:13 <shachaf> In which case, sure, "Rice's theorem for Turing machines" and "Rice's theorem for representatives of computable reals" don't exist?
11:46:18 <cpressey> shachaf: I don't care what you call it, I don't care about connectedness, I'm trying to say it has no bearing on always-halting programs.
11:46:33 <shachaf> This seems like a silly distinction to make, because Rice's theorem is about both Turing machines and partial functions.
11:47:12 <shachaf> OK, I really don't understand what you're saying. Representatives of computable reals are a special set of programs that always halt.
11:47:25 <cpressey> Yes, I know.
11:47:55 <shachaf> This theorem says, given one of these programs, what can you decide about the thing it represents?
11:48:02 <shachaf> Or rather it says you can't decide anything.
11:48:45 <shachaf> I feel like you must be reading something more into what I said before but I'm not sure what.
11:49:12 <cpressey> I guess I'm pretty bad at explaining what I'm objecting to.
11:49:19 <shachaf> Representatives of computable reals are programs that always halt, and this theorem has a bearing on them. That seems clear and uninteresting so it's probably not what you mean.
11:50:25 <cpressey> I guess it must not be.
11:51:48 <cpressey> Suppose I have a program, call it P.
11:51:56 <shachaf> There's a separate question that I had the other day, which is: Given a Turing machine that's guaranteed to halt, what properties are decidable about the function it represents?
11:52:08 <cpressey> Suppose P is known to always halt. But suppose that I don't know what P is supposed to do.
11:52:17 <shachaf> But that question had nothing to do with the reals.
11:52:56 <cpressey> And also nothing to do with Rice's theorem.
11:53:29 <cpressey> Lots and lots of nontrivial properties are decidable for a machine that's guaranteed to halt.
11:53:47 <cpressey> "Does it accept '123'?" Well, just run it and see?
11:54:13 <cpressey> I certainly didn't have the impression you thought of this as a "separate question".
11:54:39 <cpressey> <shachaf> It's interesting that it's a version of Rice's theorem that holds for programs that are guaranteed to halt.
11:54:45 <cpressey> <shachaf> What are other things you can say about programs that you know some things about?
11:54:51 <cpressey> <shachaf> For example what properties of a total function are decidable given a guaranteed-to-halt program that computes it?
11:57:15 <shachaf> Yes, those are separate questions.
11:57:46 <shachaf> What I was thinking when I said that was: We know Rice's theorem, which is about Turing machines we don't know anything about. But what about the large universe of possible theorems about what's decidable if we know some things?
11:58:05 <tromp_> properties that can be decided by finite support
11:58:51 <shachaf> Yes, that sounds right.
12:00:32 <shachaf> `olist 1181
12:00:33 <HackEso> olist 1181: shachaf oerjan Sgeo FireFly boily nortti b_jonas
12:00:37 <cpressey> Okay, my confusion was regarding how closely-related you were proposing those statements to be.
12:02:06 <cpressey> Or questions, rather
12:02:22 <shachaf> This computable real theorem is certainly not about all Turing machines that always halt. And it's not about them representing the same kind of thing, either.
12:02:57 <shachaf> So a few offhand sentences 24 hours ago were written confusingly.
12:03:03 <shachaf> But I don't think that's bad faith.
12:07:47 <cpressey> So you and Taneb both used the word "productive", and I suspect that's how I think of computable reals -- and definitely not as being represented by halting approximation functions -- which probably contributes to some of the confusion too.
12:08:12 <cpressey> Oh, you'll object to "halting function" again now won't you
12:08:56 <shachaf> I wasn't going to, but if you're being precise, I don't think a function can halt or not halt (though a partial function can be defined or not defined at a point).
12:09:17 <cpressey> Well, I can't call them "total functions" because the total functional programming people decided that includes things that map things to bottom
12:09:37 <shachaf> What? No they haven't.
12:11:12 <cpressey> That was kind of supposed to be a joke.
12:11:22 <shachaf> Ah.
12:11:34 <shachaf> Anyway, I don't think there's a big distinction between the "productive" representation and the one I used, except Turing machines aren't usually very good at expressing productivity.
12:12:07 <shachaf> I want to add one important distinction, which is that a "productive" machine can't go back and change its mind an unbounded time after it's outputted something.
12:12:52 <cpressey> I was at no point considering an unretractable output tape - just the regular tape of a TM. That's how I learned the definition of computable number, that's all.
12:12:58 <shachaf> I think you permitted that with your decimal expansion representation the other day, and it doesn't work.
12:13:21 <shachaf> If your tape is completely retractable, then you can't get any information at all about the real number in any finite amount of time.
12:13:27 <shachaf> That seems pretty bad.
12:14:16 <cpressey> That's what they call "uncomputable", yes. But at the same time, you don't have to have a strong guarantee about when and where a cell of the tape will never be overwritten.
12:14:36 <cpressey> Just that it will eventually not be overwritten again.
12:14:56 <cpressey> "Revised" might be a better word even.
12:15:50 <shachaf> I want the computable real to give me definite information about the real in finite time. I think that's a pretty reasonable request of a program.
12:15:51 <cpressey> As I said yesterday: this is basically Turing's original formulation; apparently it's subtly wrong?
12:16:14 <cpressey> Finite, but not necessarily known beforehand how much.
12:16:22 <shachaf> Well, it's wrong in the sense that it doesn't agree with the modern definition, and it's not very useful.
12:20:32 <shachaf> It's easy enough to fix it to work: You just insist that you can't rewrite the number "too much".
12:20:59 <shachaf> In particular, when you write the nth bit, you can't change the number you've already written by more than 2^-n.
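A minimal sketch of that fixed-up rule, assuming for simplicity that each write presents the whole value written so far as a rational rather than one bit (BoundedRewriter is a hypothetical name). Since the nth write may move the value by at most 2^-n, the revisions still to come after step n total at most 2^-(n+1) + 2^-(n+2) + ... = 2^-n, so every prefix of writes pins the real down to an interval of known width:

    from fractions import Fraction

    class BoundedRewriter:
        """Accepts successive revisions of an output value, rejecting any step n
        that changes the value written so far by more than 2**-n."""
        def __init__(self):
            self.n = 0
            self.value = None

        def write(self, new_value: Fraction) -> None:
            self.n += 1
            if self.value is not None and abs(new_value - self.value) > Fraction(1, 2 ** self.n):
                raise ValueError("step %d revised the output by more than 2^-%d" % (self.n, self.n))
            self.value = new_value   # after n writes, the limit is within 2**-n of value

    w = BoundedRewriter()
    w.write(Fraction(1, 2))   # first write is unconstrained
    w.write(Fraction(3, 4))   # step 2: changed by 1/4 <= 2^-2, allowed
    # w.write(Fraction(0))    # step 3 would change it by 3/4 > 2^-3: rejected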
12:22:45 <cpressey> Yes, I don't disagree with this.
12:22:52 <cpressey> Something like this.
12:23:43 <cpressey> Maybe you could say, Every finite prefix of a computable real is computable in finite time.
12:24:10 <cpressey> That leads into having a halting approximation program.
12:25:33 <shachaf> Well, reals don't have a unique prefix, is what I'm trying to say in the first place.
12:26:39 <cpressey> If you can't eventually nail down the first digit of a computable real, then it's not computable.
12:26:52 <shachaf> What's the first digit of 0.333... + 0.666...?
12:27:30 <shachaf> Oh, "eventually". That doesn't sound very computable to me, if it's not bounded.
12:28:22 <cpressey> But you're okay with "productive".
12:28:36 <cpressey> To me they're not different?
12:29:10 <shachaf> I'm OK with "productive" but not "a productive stream of digits".
12:30:03 <shachaf> Exactly because you can't always nail down even the first digit.
12:31:50 <cpressey> The sum of 0.333... + 0.666... is 0.(3,6)(3,6)(3,6)..., so the first digit to the right of the decimal point is (3,6).
12:32:42 <shachaf> What's that?
12:35:16 <cpressey> You wouldn't like it. It's got something to do with a productive stream of digits.
12:36:17 <cpressey> What is 0.333... if not a productive stream of digits? And you're asking me to *add* these objects which you apparently do not accept.
12:36:44 <shachaf> Yes, I'm bringing up this object to show the problem with the stream of digits representation.
12:36:59 <shachaf> But maybe you're still talking about the version that supports rewrites, in which case it works fine.
12:38:35 <cpressey> I don't have any problem with 0.333... + 0.666... = 0.999..., for what it's worth.
12:39:25 <esowiki> [[1+]] M https://esolangs.org/w/index.php?diff=66331&oldid=66330 * TwilightSparkle * (+2)
12:40:10 <cpressey> Is the fact that 1.000... and 0.9999... represent the same number, an actual problem?
12:40:38 <cpressey> 0.(3,6)(3,6)(3,6)... can represent that number too, as far as I can see?
12:41:49 <cpressey> For 1.111... - 0.111... I might nail down the first few digits as 1.000... but regardless of my choice of which to nail down, it's the same number?
12:43:11 <shachaf> I don't understand your notation.
12:43:39 <shachaf> The problem is that if the trillionth digit is a 7, you might have to go back and change the 0. to a 1..
12:58:10 <cpressey> Sure, so maybe you can't nail it down until you've passed the trillionth digit. But there is some point at which you can nail it down. That's what I mean by "eventually". If there is no such point, then it's not computable.
12:59:28 <shachaf> But you don't know what that point is.
12:59:37 <shachaf> That's what I want out of "computable".
12:59:49 <shachaf> (And also the standard definition.)
13:00:16 <shachaf> (I should be more careful with saying things like that parenthetical.)
13:00:38 <cpressey> But if it's a computable real, you know that the point exists, at some finite time after you start. Thus: Every finite prefix of a computable real is computable in finite time.
13:01:40 <shachaf> Is 0.9 a prefix of 1.001?
13:01:49 <shachaf> Oh, I misread.
13:10:38 <cpressey> Anyway that's why I don't have a problem with writing digits to a TM's tape. You get the 1st one in finite time, you get the 2nd one in finite time, and so forth. Maybe this isn't the best way, maybe it's what you and/or Taneb mean by "TMs are bad at that", but I don't see how it's invalid.
13:11:08 <Taneb> You can't report when you have a digit
13:15:52 -!- arseniiv has joined.
13:16:15 <cpressey> I'm fine with that, and also I'm fine with, if you do want such a report, you can use some other representation.
13:40:58 <arseniiv> imode: re. models of computation vs. models of programming: an interesting thing though I’m not sure this is how it is?
13:51:56 <lf94> arseniiv: why do you think that?
13:52:19 <lf94> I think there is a clear distinction, and if we were to program in assembly, our model of programming directly maps to the model of computation
13:54:01 <arseniiv> lf94: I haven’t read to the end yet
13:59:05 <shachaf> cpressey: This is the distinction between "Cauchy sequences" (normally used for the reals) and "quickly-converging Cauchy sequences" (normally used for the computable reals).
13:59:54 <shachaf> I think by "TMs are bad at that" Taneb just meant that there's no standard mechanism for TMs to produce intermediate output. But you can make one up easily enough.
14:00:13 <shachaf> Anyway quickly-converging Cauchy sequences have a guaranteed rate of convergence, which makes them useful for computation.
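A minimal sketch of why the guaranteed rate matters, reusing the 0.333... + 0.666... example from earlier. With representatives in the |f(e) - r| < e sense, addition just asks each argument for half the requested precision -- exactly the move a bare stream of output digits cannot make. The helper names are hypothetical:

    from fractions import Fraction

    def add(f, g):
        """Given representatives f and g of reals x and y (each maps a positive
        rational error bound to a rational within that bound), return a
        representative of x + y."""
        return lambda eps: f(eps / 2) + g(eps / 2)   # error < eps/2 + eps/2 = eps

    third = lambda eps: Fraction(1, 3)        # exact representatives of 0.333...
    two_thirds = lambda eps: Fraction(2, 3)   # ... and 0.666...
    one = add(third, two_thirds)
    print(one(Fraction(1, 1000)))             # 1, within 1/1000, no digit ever retracted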
14:02:45 <arseniiv> re. that post: hm clearly Minsky machines compute (partial) functions N^m → N^n. These functions of course aren’t that sophisticated but… I don’t see a clear border
14:06:02 <arseniiv> still that distinction has some merit
14:12:36 <arseniiv> > All those representations pass Bauer’s first condition — they are directly useful for simulating computations — but they differ widely with respect to the second test, namely the complexity of deciding the language of the representation. => aah
14:12:37 <lambdabot> <hint>:1:38: error: lexical error at character 's'
14:12:52 <arseniiv> oops I did it again
14:24:45 -!- Sgeo_ has joined.
14:28:07 -!- Sgeo has quit (Ping timeout: 265 seconds).
14:29:28 -!- sprocklem has quit (Ping timeout: 245 seconds).
14:33:47 -!- arseniiv_ has joined.
14:36:02 -!- arseniiv has quit (Ping timeout: 240 seconds).
14:56:24 -!- xkapastel has quit (Quit: Connection closed for inactivity).
15:17:09 -!- imode has joined.
15:31:35 <imode> arseniiv_: re: that classification, yeah it's incredibly useful. it allows us to actually classify different formal systems.
15:31:56 <imode> in a more productive way than just feeling around.
15:33:32 <lf94> imode: could you modify modal so I can pipe content to it
15:33:45 <lf94> I would like to write my modal code in vim, and then pipe this content to modal
15:34:04 <lf94> I guess I can just call it...
15:34:11 <imode> lf94: sure, I can do that.
15:34:25 <lf94> Also I don't think the clear command works
15:34:38 <lf94> IMO clear command is very important actually.
15:34:50 <lf94> You can dynamically re-arrange statements to your benefit
15:34:57 <lf94> (As in, clear command should be a fundamental importance)
15:35:10 <imode> the clear command works. it just resets everything to defaults.
15:35:24 <imode> line 407.
15:35:24 <lf94> I tried it the other day ... maybe it was on your telnet interpreter...
15:35:31 <lf94> ok :)
15:36:01 <imode> I'll add a command switch to just evaluate a file.
15:38:18 <imode> or just give you an alternative interpreter.
15:38:40 <lf94> you should add a switch for interpreter mode
15:38:46 <lf94> otherwise, pipe mode
15:38:47 <lf94> :D
15:39:07 <imode> bear in mind, this language/interpreter is not being worked on.
15:42:00 <lf94> it doesn't need to be :)
15:42:30 <lf94> This is very exploratory for me
15:45:05 <imode> https://hatebin.com/wljbtaonwn here you are.
15:45:22 <imode> just run it the same as normal. it'll evaluate your file and exit.
15:45:33 <imode> along with printing out the time taken.
15:45:58 <imode> to test, just run it with a file containing 1 + 2
15:45:59 <imode> or something.
15:50:55 <imode> modal is a bad language because it's unpredictable and masks the underlying model of computation while relying heavily on it for its reduction semantics.
15:54:43 <imode> lf94: what I was working on _after_ modal is a combinatory interpreter using some of the same evaluation tactics as modal, i.e using a queue.
15:55:15 <imode> combinatory logic is confusing even to me, though. not because the underlying model is confusing, but after a certain point, things just look like a mess.
15:55:28 <imode> that was about the time I swapped to more machine models.
15:57:23 -!- cpressey has quit (Quit: A la prochaine.).
16:03:26 <lf94> "it's unpredictable"
16:03:45 <imode> eh, I misspoke there. its evaluation method is unconventional.
16:03:49 <lf94> I think it can be made predictable
16:04:01 <lf94> using certain patterns or maybe modifying the interpreter behavior a tad
16:05:36 <imode> a better method imho is to keep the _concept_ of term rewriting around, but look at the base model of computation a little more closely.
16:06:19 <imode> this interpreter is juggling two associative arrays, one of which is totally cleared every time a reduction takes place and is only needed for the internal mechanics of reduction. it is also juggling a queue that holds the entire expression.
16:07:14 <imode> dropping the reduction semantics just gives you a queue automaton that behaves a lot like a traditional Forth if you add the right operators other than dup, swap, drop, namely 'last'.
16:08:49 <imode> the next question is.. is this the right level of abstraction. we've traded something reasonably elegant for a machine model, with instructions executing sequentially, etc.
16:09:49 <imode> and better yet, is this the right machine model.
16:12:19 <lf94> I was thinking maybe assembly with a more modern macro system would be the best.
16:13:06 <lf94> performant and maintainable code
16:13:25 <lf94> and you can write a simple interpreter if you keep to a subset of a popular ISA
16:13:53 <lf94> and then you can write in this assembly language forever
16:14:22 <lf94> and then you can build more powerful tools on top of this
16:18:12 <imode> yeah. if you telnet into imode.tech on port 1024, you can see something I've been working on.
16:18:43 <imode> it's a forth-like concatenative language based around queue automata.
16:43:00 -!- Lord_of_Life_ has joined.
16:45:44 -!- Lord_of_Life has quit (Ping timeout: 268 seconds).
16:45:52 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
16:55:05 -!- FreeFull has joined.
17:01:57 -!- Phantom_Hoover has joined.
17:05:11 -!- arseniiv_ has changed nick to arseniiv.
18:03:39 -!- LKoen has joined.
18:34:57 <lf94> imode: read that article you sent me yesterday. very interesting.
18:35:28 <lf94> Only further makes me think, why are we not working from machine models
18:36:34 <lf94> I guess it really _does_ prevent us from higher order thinking
18:37:08 <lf94> like a register move: ld a, b is /similar/ to a = b; but it's also vastly different
18:37:24 <lf94> a = b, a and b are very abstract, whereas ld a, b; these are concrete
18:39:31 -!- sprocklem has joined.
18:41:16 <lf94> I think this is why stack based languages are appealing
18:45:58 <lf94> (the stack provides a way to have arbitrary variables)
18:54:46 <lf94> I'm seeing that you can write a forth interpreter/compiler in such a way that it can re-arrange the postfix to become prefix
18:54:49 <lf94> + 1 2 vs 1 2 +
19:03:28 <arseniiv> yeah one needs to read RTL instead of LTR and reverse arguments of all ops
19:04:30 <arseniiv> though we can read LTR and use a stack of incomplete applications
19:11:36 <lf94> I'm just saying we can have a forth where we read LTR
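A minimal sketch of the LTR reading with a stack of incomplete applications that arseniiv describes; the operator table and function name are illustrative assumptions:

    OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

    def eval_prefix(tokens):
        """Evaluate a prefix expression such as "+ 1 2" reading left to right,
        keeping a stack of operators still waiting for their arguments."""
        stack = []        # frames: [operator, arguments collected so far]
        result = None
        for tok in tokens:
            if tok in OPS:
                stack.append([OPS[tok], []])
                continue
            value = int(tok)
            while True:   # a finished value fills the innermost pending application
                if not stack:
                    result = value
                    break
                op, args = stack[-1]
                args.append(value)
                if len(args) < 2:
                    break
                stack.pop()
                value = op(*args)   # a completed application becomes a value itself
        return result

    print(eval_prefix("+ 1 2".split()))       # 3, same as postfix "1 2 +"
    print(eval_prefix("+ 1 * 2 3".split()))   # 7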
19:35:05 -!- andrewtheircer_ has joined.
19:35:41 <andrewtheircer_> hi
19:37:16 <arseniiv> lf94: of course
19:37:34 <arseniiv> andrewtheircer_: hi
19:37:40 <andrewtheircer_> sup
19:37:46 <andrewtheircer_> new eso projects?
19:51:59 <imode> lf94: a purer form imo of forth-like langs is one where a queue is used. because of the universal nature of queue automata (they're turing complete), you can theoretically pack everything from main memory to a call stack onto a single work area.
19:52:14 <lf94> yes
19:53:22 <imode> whether it's a suitable machine model is something I'm investigating.
19:54:33 <lf94> why wouldnt it be
19:54:51 <imode> I'm a sucker for minimalism.
19:55:34 <imode> as far as I can tell, the only queue manipulation primitives required for a model like mine are duplicate, swap, drop and an operator I'm calling 'last'.
19:55:54 <imode> this plus control flow structures and comparisons.
19:56:47 -!- sprocklem has quit (Ping timeout: 265 seconds).
19:56:51 <imode> 'last' seems to be a dirty hack as well. it takes something from the end of the queue and pushes it to the start.
19:57:03 <lf94> imode call it peek
19:57:09 <lf94> oh
19:57:17 <imode> 3 2 1 last -> 1 3 2
19:57:21 <lf94> imode: if your queue is circular...
19:57:26 <lf94> call it rotate
19:57:36 <lf94> or shift
19:57:39 <imode> it's less of a naming thing and more of an issue with semantics.
19:58:09 <lf94> that's just like a bitwise shift though
19:58:16 <lf94> that would be a right-shift
19:58:32 <lf94> left-shift would go the other way
19:58:36 <lf94> 2 1 3
19:58:54 <imode> 'last' can be implemented in a traditional queue automaton pretty easily: all you have to do is roll through the queue, keeping track of the last logical symbol you encountered.
19:58:55 <lf94> I think shifting should be a primitive and is definitely not a hack
19:59:18 <imode> it takes linear time and uses the state of the automaton's FSM to hold that last logical symbol.
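A minimal sketch of that pass, using Python's collections.deque as the queue and its known length; an actual queue automaton would roll through to an end marker while holding one symbol in its finite control, as described, but the effect is the same -- the back item comes around to the front in one linear pass:

    from collections import deque

    def last(q: deque) -> None:
        """Bring the item at the back of the queue to the front by cycling
        every other item from the front to the back once (linear time)."""
        for _ in range(len(q) - 1):
            q.append(q.popleft())

    q = deque([3, 2, 1])
    last(q)
    print(list(q))   # [1, 3, 2], i.e. 3 2 1 last -> 1 3 2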
19:59:32 <andrewtheircer_> what's this about
19:59:37 <lf94> why not use a linked list
20:00:00 <imode> why use one, the interface is the same.
20:10:56 <arseniiv> BTW is there a page with computational automata hierarchy? It could be useful for someone
20:11:40 <arseniiv> I’d call imode’s one a half-deque automaton
20:12:50 <andrewtheircer_> automata
20:13:03 <arseniiv> we have pop/push at the front and pop from the back, though one can factor dup, drop, swap and last through different set of operations
20:13:25 <arseniiv> andrewtheircer_: plural of “automaton” :)
20:13:35 <andrewtheircer_> i was saying
20:13:38 <andrewtheircer_> automata is the topic
20:13:39 <andrewtheircer_> nice
20:13:43 <andrewtheircer_> i hope we can get to CAs
20:13:54 <andrewtheircer_> 'cos i have made a metaphor out of them
20:14:41 <arseniiv> there was a discussion of CAs a day earlier
20:15:09 <arseniiv> though there wasn’t much and it wasn’t the topic itself
20:19:32 <imode> arseniiv: I'd call it a queue automaton with a built-in.
20:20:01 <imode> it's easy to construct a machine that performs 'last'. it just takes N*M time.
20:48:37 -!- arseniiv_ has joined.
20:51:37 -!- arseniiv has quit (Ping timeout: 250 seconds).
20:54:20 -!- arseniiv_ has changed nick to arseniiv.
20:59:20 -!- andrewtheircer_ has changed nick to andrewthediscord.
20:59:25 -!- andrewthediscord has changed nick to andrewtheircer.
21:10:12 -!- andrewtheircer has quit (Remote host closed the connection).
21:25:09 -!- Sgeo__ has joined.
21:28:47 -!- Sgeo_ has quit (Ping timeout: 268 seconds).
21:32:12 -!- Lykaina has joined.
21:35:26 <arseniiv> imode: in your pythonic C compiler, I saw both s.d and s->d, though you probably saw that already
21:35:43 <arseniiv> compiler-to-C, I mean
21:36:23 <arseniiv> ah, never mind
21:36:42 <imode> yup, the stack is a local.
21:36:58 <arseniiv> I missed that there are *s in functions where s->... etc.
21:37:00 <arseniiv> my bad
21:37:26 <arseniiv> `:` is hard to type
21:37:26 <HackEso> ​/srv/hackeso-code/multibot_cmds/lib/limits: line 5: exec: :`: not found
21:37:37 <arseniiv> eh
21:38:04 <arseniiv> today I’m dancing on bots’ legs
21:39:33 <arseniiv> and `?` and `;` are too even if we take `[…]` as primitive instead of `[` and `]` separately
21:40:20 <arseniiv> though it’s obvious it’s me alone who wanted to type this language :D
21:42:39 <arseniiv> I mean, for what I know, there’s no sense to encode the fact that `[` and `]` have to be balanced and `?` and `;` should only occur inside `[…]`, in their types
21:43:29 <arseniiv> I don’t know if it’s even possible, and as a theoretical interest I’d be glad to see how if it is
21:44:35 <arseniiv> and disregarding these syntactic constraints and `:` all of it is of course typable easily
21:45:30 <arseniiv> if conservatively
21:46:38 -!- Sgeo_ has joined.
21:46:40 <imode> something I have been mulling about in the back of my mind is a graph rewriting language.
21:46:43 <arseniiv> e. g. we could allow inside `[…]` a code that grows the stack, but we better disallow that
21:46:57 -!- Lykaina has quit (Quit: leaving).
21:47:11 <arseniiv> imode: not like Eoderdrome or what it’s called?
21:47:26 <arseniiv> ah, https://esolangs.org/wiki/Eodermdrome
21:48:03 <arseniiv> ah′, it has a fixed initial state
21:48:23 <arseniiv> probably not what you think
21:49:10 <imode> nah, more flexible.
21:49:19 <arseniiv> I thought so
21:49:37 -!- Sgeo__ has quit (Ping timeout: 245 seconds).
21:49:54 <arseniiv> imode: also am I right this compiler generates a code for `:` without bound checking?
21:50:05 <imode> you are correct.
21:51:49 <arseniiv> aren’t you afraid somebody will write an OS in this language and make an error using `:`? ::D
21:52:12 <imode> if someone chooses this language to write an OS in, I'll be too shocked to consider the security issues. :P
21:52:34 <arseniiv> hm I personally am afraid of C programs maybe now
21:52:47 * arseniiv segfaulted
21:52:56 <imode> gg.
21:53:20 <imode> the graph rewriting language would rewrite undirected, unlabeled graphs.
21:54:59 <imode> rules are of the form a1 a2, a3 a4, ..., aN aM -> b1 b2, b3 b4, ... bN bM;
21:54:59 <arseniiv> hm it’s occurred to me now someone could do something with hypergraphs too
21:55:04 <fizzie> Sounds like Eodermdrome.
21:55:12 <imode> it isn't.
21:56:00 <imode> you essentially try to match the graph bit by bit on the left hand side, remove all matched items, and then construct a new subgraph on the right hand side.
21:56:21 <arseniiv> how should it connect to the old parts?
21:56:23 <imode> here's an explosion: A B -> A B A
21:56:29 <imode> err.
21:56:36 <imode> A B -> A B, B A
21:57:06 <imode> that'd just be an infinite loop because the graphs are undirected.
21:57:27 <imode> if you do A B -> A B, B C, C A, you'll generate an ever-expanding loop of vertices.
21:57:37 <imode> any variables on the RHS are autogenerated vertices.
21:57:50 <imode> any variables that are unused, rather. sorry, cooking dinner.
21:57:54 <fizzie> Then it's an unlabeled multigraph?
21:58:12 <imode> no, just an unlabeled graph. multiple edges aren't allowed.
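A minimal sketch of one rewrite step under the rules described above: undirected, unlabeled, no multiple edges, with RHS-only variables becoming fresh vertices. The representation (a set of two-element frozensets over integer vertices), the brute-force matcher, and the assumption that distinct LHS variables match distinct vertices are all illustrative choices, not imode's implementation:

    import itertools
    import random

    def apply_rule(edges, lhs, rhs):
        """edges: set of frozenset({u, v}).  lhs/rhs: lists of variable pairs,
        e.g. lhs = [('A', 'B')], rhs = [('A', 'B'), ('B', 'C'), ('C', 'A')].
        Match the LHS somewhere in the graph (nondeterministically), remove the
        matched edges, and add the RHS edges, inventing fresh vertices for
        variables that appear only on the right-hand side."""
        vertices = sorted({v for e in edges for v in e})
        lhs_vars = sorted({v for pair in lhs for v in pair})
        candidates = list(itertools.permutations(vertices, len(lhs_vars)))
        random.shuffle(candidates)               # nondeterministic choice of match
        for picked in candidates:
            env = dict(zip(lhs_vars, picked))
            matched = {frozenset((env[a], env[b])) for a, b in lhs}
            if matched <= edges:                 # every LHS edge is present
                fresh = itertools.count(max(vertices, default=-1) + 1)
                for v in (x for pair in rhs for x in pair):
                    if v not in env:             # RHS-only variable: fresh vertex
                        env[v] = next(fresh)
                return (edges - matched) | {frozenset((env[a], env[b])) for a, b in rhs}
        return edges                             # no redex: graph unchanged

    # one edge, rewritten with A B -> A B, B C, C A: an ever-growing foam of loops
    g = {frozenset((0, 1))}
    for _ in range(3):
        g = apply_rule(g, [('A', 'B')], [('A', 'B'), ('B', 'C'), ('C', 'A')])
    print(sorted(sorted(e) for e in g))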
21:58:42 <fizzie> Oh, I didn't read the adjustment to "explosion".
21:59:08 <arseniiv> <imode> if you do A B -> A B, B C, C A, you'll generate an ever-expanding loop of vertices. => not a simple loop though. It would look like some foam
21:59:24 <imode> it would, because it'd match nondeterministically.
21:59:57 <imode> it'd be a bunch of loops. small ones to large ones.
22:00:33 <imode> in order to make anything actually useful, you need to build two things: labels and directed edges.
22:00:43 <arseniiv> it almost tempts me to analyze how their lengths would be distributed but I don’t know asymptotic methods
22:01:35 <imode> you can abstract high enough until you can build things like lists and sequential statements. you can construct finite state machines.
22:01:38 <imode> and a tape.
22:01:59 <imode> sky's the limit after that, tbh.
22:02:14 <arseniiv> <imode> and a tape. => it was obvious in hindsight :D
22:02:29 <fizzie> Anyway, it *still* sounds quite Eodermdrome-y, disregarding the input/output aspects. But maybe the devil's in the details then.
22:02:36 <imode> you can build, for example, a lot of RDF stuff.
22:02:53 <imode> fizzie: it's pretty far from eodermdrome because I can construct arbitrary graphs.
22:03:37 <imode> you can reconstruct SPARQL using the language.
22:04:50 <imode> one thing I dislike about this approach is 1. there's no clear method of storing a graph (many approaches, not all of them ideal), and 2. the rewrite mechanism involves some behind-the-scenes work.
22:07:20 <imode> there's no machinery that directly correlates to a graph rewrite language. string rewriting and turing machines/queue machines are pretty much hot swappable.
22:08:18 <arseniiv> labeled metavertices and oriented metaedges are an interesting problem; it's not immediately obvious how to construct them so the abstraction doesn't break when matching them
22:10:05 <arseniiv> (also for the sake of fun I’d consider hyperedges. They don’t seem to make things much easier nor much harder)
22:10:18 <arseniiv> (I mean, as a primitive)
22:12:01 <arseniiv> I see a metavertex as a core which encodes its label and has e. g. a special vertex to which “edge hubs” are connected
22:12:05 <arseniiv> hm no
22:12:58 <arseniiv> maybe we should delineate parts encoding different things by several graphs not occurring in any other place
22:14:50 <arseniiv> like K_n are not hard to match but K_n ⊂ K_(n+1)
22:15:42 <arseniiv> K_m,n promise more, K_(m+1),n seems not to be a subgraph of K_m,(n+1) if m ≠ n?
22:18:14 <int-e> hmm what if you allow n = 0
22:18:48 <int-e> (which has the odd property that K_(m+1),n doesn't have any vertex with degree m+1)
22:22:57 <arseniiv> yes we should pick m, n large enough
22:29:47 -!- sprocklem has joined.
22:30:47 <imode> there's a lot of issues imo with this kind of rewriting scheme, which is why I've strayed away from it.
22:31:01 <imode> a lot of times in rewrite languages it's hard to get your code to _not_ do something.
22:31:09 -!- Sgeo__ has joined.
22:31:23 <arseniiv> at first let’s encode a vertex labeled n ∈ N as A—(B—)^n where A, B are nice graphs and A has one special vertex and B has two, to link them; and encode an edge → as —C—D—, with former conventions
22:31:24 <int-e> enter conditional or constrained rewriting
22:31:56 <arseniiv> then we need to enforce constraints on A, B, C, D to not allow accidental matches
22:32:55 <arseniiv> maybe we need to add “buffer vertices” connecting to a vertex and all the edges incident to it
22:33:34 <arseniiv> or maybe they could connect closely
22:34:12 -!- Sgeo_ has quit (Ping timeout: 245 seconds).
22:34:30 <arseniiv> is it a good idea?
22:36:42 <imode> I think the whole thing is a bad idea. I keep coming back to it to see if it's a good one, but every addition is just compensating for something.
22:37:03 <arseniiv> maybe we need to add “buffer vertices” connecting to a vertex and all the edges incident to it => I mean “connecting to a metavertex and all metaedges…”
22:39:09 <arseniiv> it certainly seems to be not too effective regarding its implementation nor the implementation of something in it
22:39:55 <imode> linear media is always gonna win out vs. any kind of graph or tree structure.
22:40:32 <arseniiv> hm I wonder now how about undirected trees
22:41:20 <arseniiv> but maybe with a fixed finite set of labels
22:42:28 <arseniiv> oh we could simulate a tape of directed trees
22:42:56 <arseniiv> encode*
22:43:44 <arseniiv> couldn’t simulate if subtree patterns aren’t allowed
22:44:08 <esowiki> [[Byter]] M https://esolangs.org/w/index.php?diff=66332&oldid=49879 * PaniniTheDeveloper * (+1)
22:47:14 -!- lambdabot has quit (Remote host closed the connection).
22:50:00 -!- lambdabot has joined.
23:09:40 <imode> maybe I should just use a tape instead of a queue.
23:10:32 -!- Phantom_Hoover has quit (Ping timeout: 246 seconds).
23:11:16 <imode> 'last' is just "move the tape to the left" anyway.
23:11:33 <imode> hm.
23:12:51 -!- FreeFull has quit.
23:13:58 <int-e> use two stacks (a rubber tape)
23:14:07 <imode> rubber tape?
23:14:19 <int-e> you know, stretchable
23:14:26 <int-e> flexible
23:14:45 <imode> ah, that you can just arbitrarily insert under the cursor vs. moving everything over?
23:14:51 <int-e> yes
23:14:56 <imode> yeah I was thinking about that...
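A minimal sketch of that two-stack tape (the class name is hypothetical): the cell under the cursor is the top of the right-hand stack, moving the head shifts one element between the stacks, and inserting under the cursor is a single push rather than shifting everything after it:

    class RubberTape:
        """A tape as two stacks: `left` holds the cells to the left of the head
        (top = nearest), `right` holds the current cell and everything after it."""
        def __init__(self, cells=()):
            self.left, self.right = [], list(reversed(list(cells)))

        def read(self):
            return self.right[-1] if self.right else None   # None = blank cell

        def write(self, x):
            if self.right:
                self.right[-1] = x
            else:
                self.right.append(x)

        def insert(self, x):
            self.right.append(x)    # stretch the tape under the head, O(1)

        def move_right(self):
            self.left.append(self.right.pop() if self.right else None)

        def move_left(self):
            if self.left:
                self.right.append(self.left.pop())

    t = RubberTape([1, 2, 3])
    t.move_right()
    t.insert(9)       # tape is now 1 9 2 3, head on the 9
    print(t.read())   # 9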
23:26:30 -!- LKoen has quit (Remote host closed the connection).
23:27:25 <arseniiv> hm you can use ror and rol (one of them is last) and banish pick, as you can decrement the index and shift it further until it’s zero
23:28:07 <arseniiv> though you’ll need to shift an additional copy of the unchanged index to return back
23:29:18 <arseniiv> imode: ^ if that’s of any use
23:29:56 <arseniiv> one of them is last => s/is/formerly known as
23:30:11 <imode> you know, it's funny.
23:30:52 <imode> https://hatebin.com/ysmcogarab
23:31:00 <imode> pretty much what I did. lol.
23:32:05 <arseniiv> hm seems pretty long
23:32:56 <arseniiv> though I don’t remember all the primitives, what they are this time?
23:33:30 <imode> dup, drop, swap, last, begin ([), repeat (]), if/while (?)...
23:34:12 <arseniiv> ah I mis-eyed or something, then
23:34:21 <arseniiv> there are definitions at the top
23:34:37 <imode> yup!
23:36:57 <arseniiv> (hm is sswap really swapping two top values and then getting two from the bottom? why is it useful?)
23:38:24 <arseniiv> I think I greatly misunderstand now, but maybe I’ll get it soon
23:39:42 <imode> sswap behaves like forth's swap.
23:40:08 <imode> swap grabs the top two values, then enqueues both of them but swapped.
23:40:15 <arseniiv> aaaah
23:40:20 <imode> so grabbing the last two values means they're pushed to the front of the queue again.
23:40:30 <imode> 1 2 3 sswap -> 2 1 3
23:40:42 <imode> vs. 1 2 3 swap -> 3 2 1
23:40:49 <arseniiv> and 1 2 dup -> 2 2 1?
23:41:47 <arseniiv> I was thinking today why you call it a queue automaton and didn’t even consider I should reinterpret what swap and dup are doing here
23:42:49 <imode> 1 2 dup -> 1 2 1
23:42:58 <imode> you could also interpret it as 1 2 dup -> 2 1 1
23:43:08 <arseniiv> what how
23:43:22 <arseniiv> hm
23:43:26 <imode> examine the top of the queue (leftmost item)
23:43:35 <imode> enqueue a copy of it.
23:43:37 <arseniiv> I didn’t read “1 2 3 sswap -> 2 1 3” properly
23:43:52 <arseniiv> I thought you take from the right end
23:43:56 <imode> ahh, sorry.
23:44:02 <imode> my convention is always from the left.
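A minimal sketch of those conventions on Python's collections.deque, with the front of the queue on the left; it reproduces the examples above. In imode's interpreter sswap is a word defined from the primitives, so using appendleft here is only a shortcut to show its effect ('last' is sketched further up in the log):

    from collections import deque

    def dup(q):
        q.append(q[0])                     # copy the front item to the back: 1 2 dup -> 1 2 1

    def swap(q):
        a, b = q.popleft(), q.popleft()
        q.append(b); q.append(a)           # dequeue two, enqueue them swapped: 1 2 3 swap -> 3 2 1

    def sswap(q):
        a, b = q.popleft(), q.popleft()
        q.appendleft(a); q.appendleft(b)   # forth-style swap at the front: 1 2 3 sswap -> 2 1 3

    q = deque([1, 2, 3]); swap(q);  print(list(q))   # [3, 2, 1]
    q = deque([1, 2, 3]); sswap(q); print(list(q))   # [2, 1, 3]
    q = deque([1, 2]);    dup(q);   print(list(q))   # [1, 2, 1]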
23:44:16 <arseniiv> I should have declared mine too
23:44:51 <arseniiv> now I think it would go smoothly
23:47:05 <imode> telnet into imode.tech, port 1024 to access a REPL.
23:47:51 -!- LKoen has joined.
23:48:05 <imode> what's interesting is that you can define `factorial` as `: factorial range product ;`
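The same shape in Python, assuming range pushes 1..n and product folds the result with multiplication (the log doesn't define either word, so that reading is a guess):

    from functools import reduce
    from operator import mul

    def factorial(n: int) -> int:
        # "range" then "product", mirroring `: factorial range product ;`
        return reduce(mul, range(1, n + 1), 1)

    print(factorial(5))   # 120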
23:51:43 -!- oerjan has joined.
23:56:22 -!- LKoen has quit (Quit: “It’s only logical. First you learn to talk, then you learn to think. Too bad it’s not the other way round.”).