00:38:04  [[MikuLang]]  https://esolangs.org/w/index.php?diff=172243&oldid=172012 * Frendoly * (+1426) added interpreter
00:40:41  [[User talk:Frendoly]]  https://esolangs.org/w/index.php?diff=172244&oldid=170814 * Frendoly * (+104)
00:44:57  [[Talk:MicroMiku]] N https://esolangs.org/w/index.php?oldid=172245 * Frendoly * (+185) Created page with "This article was made to find a way to get it working for micropython, but since i made a interpreter for [[MikuLang]] now this article is useless, im wondering if i can get it removed?"
00:45:09  [[Talk:MicroMiku]]  https://esolangs.org/w/index.php?diff=172246&oldid=172245 * Frendoly * (+88)
00:51:11  [[PMPL]]  https://esolangs.org/w/index.php?diff=172247&oldid=172242 * A() * (+89)
01:04:29  [[PMPL]]  https://esolangs.org/w/index.php?diff=172248&oldid=172247 * A() * (+1) /* loop */
01:07:43  [[PMPL]]  https://esolangs.org/w/index.php?diff=172249&oldid=172248 * A() * (+16) /* FizzBuzz */
01:17:22  [[FizzBuzz]]  https://esolangs.org/w/index.php?diff=172250&oldid=165431 * A() * (+244)
01:47:37  [[User:A()]]  https://esolangs.org/w/index.php?diff=172251&oldid=172151 * A() * (+10)
01:47:55  [[User:A()]]  https://esolangs.org/w/index.php?diff=172252&oldid=172251 * A() * (-122)
02:27:08  [[PMPL]]  https://esolangs.org/w/index.php?diff=172253&oldid=172249 * A() * (+143)
02:27:26 -!- amby has quit (Quit: so long suckers! i rev up my motorcylce and create a huge cloud of smoke. when the cloud dissipates im lying completely dead on the pavement).
02:28:12  [[PMPL]]  https://esolangs.org/w/index.php?diff=172254&oldid=172253 * A() * (+14)
03:23:16  [[DOESNT]]  https://esolangs.org/w/index.php?diff=172255&oldid=172228 * &0 * (+1) fix typo
03:39:46 -!- sprocket has joined.
03:40:08 -!- sprock has quit (Ping timeout: 240 seconds).
03:43:34 -!- molson_ has joined.
03:46:19 -!- chloetax1 has joined.
03:48:20 -!- Lymee has joined.
03:48:54 -!- simcop2387_ has joined.
03:50:39 -!- Lymia has quit (Ping timeout: 246 seconds).
03:50:39 -!- molson has quit (Ping timeout: 246 seconds).
03:50:39 -!- simcop2387 has quit (Ping timeout: 246 seconds).
03:50:39 -!- chloetax has quit (Ping timeout: 246 seconds).
03:50:40 -!- chloetax1 has changed nick to chloetax.
03:50:42 -!- simcop2387_ has changed nick to simcop2387.
04:06:23 -!- lambdabot has quit (Ping timeout: 246 seconds).
04:08:12  [[User:Tommyaweosme]]  https://esolangs.org/w/index.php?diff=172256&oldid=170825 * Tommyaweosme * (+476)
04:09:07 -!- lambdabot has joined.
04:37:18  What data structure should be used for converting between 16-bit character codes and 32-bit character codes in both directions? (The mapping will be defined in an external file and will need to be read and made into the data structure used internally)
05:26:45  [[F,u,c,k.]]  https://esolangs.org/w/index.php?diff=172257&oldid=164691 * RikoMamaBala * (+382)
05:28:29 -!- scoofy has joined.
07:29:11 -!- Sgeo has quit (Read error: Connection reset by peer).
08:05:22 -!- tromp has joined.
08:23:34 -!- tromp has quit (Ping timeout: 246 seconds).
09:19:14  [[Talk:MicroMiku]] M https://esolangs.org/w/index.php?diff=172258&oldid=172246 * RaiseAfloppaFan3925 * (+370) I think you can ask an admin to delete this page
10:05:57  [[Bitflipper]]  https://esolangs.org/w/index.php?diff=172259&oldid=122888 * Yayimhere2(school) * (-29) /* Interpreters */ It is infact NOT Tc, because it cannot access unbounded memory
10:29:30 -!- perlbot has quit (Ping timeout: 244 seconds).
10:29:55 -!- simcop2387 has quit (Ping timeout: 245 seconds).
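A minimal sketch of one way to answer the 04:37:18 question above, assuming the external file is a hypothetical plain-text list of hexadecimal pairs, one "XXXX YYYYYYYY" per line: two hash maps give constant-time lookup in both directions. If the 16-bit side is dense, a 65536-entry array plus a single map for the reverse direction would work just as well.

    # Sketch only: load a hypothetical "XXXX YYYYYYYY" hex-pair file into
    # two dictionaries, one per lookup direction.
    def load_mapping(path):
        to_32 = {}   # 16-bit code -> 32-bit code
        to_16 = {}   # 32-bit code -> 16-bit code
        with open(path) as f:
            for line in f:
                line = line.split("#")[0].strip()   # tolerate comments and blank lines
                if not line:
                    continue
                c16, c32 = (int(field, 16) for field in line.split())
                to_32[c16] = c32
                to_16[c32] = c16
        return to_32, to_16

    # Usage (hypothetical file name): to_32, to_16 = load_mapping("mapping.txt")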
10:30:36  [[Talk:Turing tarpit]]  https://esolangs.org/w/index.php?diff=172260&oldid=169829 * JIT * (+323) /* What is the limit to The Turing Tarpit? */ new section
10:36:24  [[.chat]]  https://esolangs.org/w/index.php?diff=172261&oldid=169578 * Yayimhere2(school) * (+2) /* Commands */
10:40:48  [[Talk:110010000100110110010]] N https://esolangs.org/w/index.php?oldid=172262 * Yayimhere2(school) * (+216) Created page with "The proof seems self referential, because the formula for each variable holds itself, its recursive --~~~~"
10:48:29  [[Talk:]] N https://esolangs.org/w/index.php?oldid=172263 * Yayimhere2(school) * (+212) Created page with "The proof seems incorrect, because of $, which allows reading of other characters. --~~~~"
10:48:37  [[Standard Test Paper]] N https://esolangs.org/w/index.php?oldid=172264 * Yoyolin0409 * (+905) Created page with "'''Standard Test Paper''' is an esolang by [[User:yoyolin0409]]. ==Papermaking== Select some high-quality Unicode characters. These characters include "", "", "", "", "", and "". Weave the "" symbols into a long line consisting of 21 "" symbols. Weave t
10:49:09  [[User:Yoyolin0409]]  https://esolangs.org/w/index.php?diff=172265&oldid=172196 * Yoyolin0409 * (+25)
10:52:14  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172266&oldid=172264 * Yayimhere2(school) * (+9) this seems to just be like, a unicode shape? most definitely not an gosling, or atleast not one that is described, so I added {{stub}}. yoyolin, care to explain how this is an esolang?
10:53:02 -!- perlbot has joined.
10:53:22 -!- Yayimhere has joined.
10:53:28  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172267&oldid=172266 * Yoyolin0409 * (+88)
10:54:13 -!- tromp has joined.
10:55:13  [[User talk:Yayimhere2(school)]]  https://esolangs.org/w/index.php?diff=172268&oldid=168452 * Yoyolin0409 * (+215) /* Reply to Standard Test Paper */ new section
10:56:59  [[User talk:Yayimhere2(school)]]  https://esolangs.org/w/index.php?diff=172269&oldid=172268 * Yayimhere2(school) * (+238) /* Reply to Standard Test Paper */
10:58:04  [[User:Yayimhere2(school)]]  https://esolangs.org/w/index.php?diff=172270&oldid=145485 * Yayimhere2(school) * (+116)
10:59:07 -!- simcop2387 has joined.
11:03:35  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172271&oldid=172267 * Yoyolin0409 * (+1258)
11:09:48 -!- simcop2387 has quit (Quit: ZNC 1.9.1+deb2+b3 - https://znc.in).
11:10:08 -!- simcop2387 has joined.
11:10:24  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172272&oldid=172271 * Yayimhere2(school) * (-53) /* Papermaking */ ->
11:12:44  [[Talk:Turing tarpit]]  https://esolangs.org/w/index.php?diff=172273&oldid=172260 * Yayimhere2(school) * (+133) /* What is the limit to The Turing Tarpit? */
11:17:22  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172274&oldid=172272 * Yoyolin0409 * (+644) /* Writing basic documents */
11:17:35  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172275&oldid=172274 * Yoyolin0409 * (-5) 
11:18:43  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172276&oldid=172275 * Yoyolin0409 * (+44) /* Writing basic documents */
11:19:52  [[Standard Test Paper]]  https://esolangs.org/w/index.php?diff=172277&oldid=172276 * Yoyolin0409 * (+38) 
11:37:30  [[Talk:]]  https://esolangs.org/w/index.php?diff=172278&oldid=172263 * PkmnQ * (+308) 
11:38:04  [[Talk:]]  https://esolangs.org/w/index.php?diff=172279&oldid=172278 * Yayimhere2(school) * (+188) 
11:44:33 -!- Yayimhere has quit (Ping timeout: 272 seconds).
12:24:00  [[Special:Log/upload]] upload  * PrySigneToFry *  uploaded "[[File:QianJianTec1767615761041.png]]": It's just a polynomial, what harm could it possibly have?
12:57:29  [[Crypten]] M https://esolangs.org/w/index.php?diff=172281&oldid=166924 *  * (+11) Fixed broken link
13:09:52  [[Polynomix]] N https://esolangs.org/w/index.php?oldid=172282 * I am islptng * (+125) Created page with "Polynomix will be a powerful computer language designed by islptng. Maybe it'll be implemented in Rust (I'm not sure.)"
13:46:13 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
14:30:05  Celebrate Mungday! Hail Eris!       😇
14:59:50 -!- tromp has joined.
15:27:29 -!- amby has joined.
15:43:26 -!- ais523 has joined.
15:44:06  The interview between Daniel Temkin and yayimhere is now online: https://esoteric.codes/blog/yayimhere-interview
16:03:44  [[Talk:MicroMiku]]  https://esolangs.org/w/index.php?diff=172283&oldid=172258 * Ais523 * (+535) why not merge and redirect?
16:23:05  [[User:Aadenboy/Countable]]  https://esolangs.org/w/index.php?diff=172284&oldid=172182 * Aadenboy * (+1033) okay this is much better. I like this
16:24:04  [[FISHQ9+]]  https://esolangs.org/w/index.php?diff=172285&oldid=66257 * DockedChutoy * (+371) 
16:26:40  [[User:Aadenboy]] M https://esolangs.org/w/index.php?diff=172286&oldid=172147 * Aadenboy * (+0) formatting
16:33:43  [[User:Aadenboy/Countable]] M https://esolangs.org/w/index.php?diff=172287&oldid=172284 * Aadenboy * (+30) /* Commands */
16:36:00  [[User:Aadenboy/Countable]]  https://esolangs.org/w/index.php?diff=172288&oldid=172287 * Aadenboy * (+60) /* Commands */ extremely esoteric
16:37:14  [[User:Aadenboy/Countable]]  https://esolangs.org/w/index.php?diff=172289&oldid=172288 * Aadenboy * (+30) /* Commands */ 4-6 commands
16:37:17 -!- korvo has quit (Quit: korvo).
16:37:37 -!- korvo has joined.
16:42:32  [[Abacus Computer]]  https://esolangs.org/w/index.php?diff=172290&oldid=171562 * Timm * (-12) 
16:42:46  [[Abacus Computer]]  https://esolangs.org/w/index.php?diff=172291&oldid=172290 * Timm * (-14) 
16:43:47  [[Talk:Turing tarpit]]  https://esolangs.org/w/index.php?diff=172292&oldid=172273 * Corbin * (+123) /* What is the limit to The Turing Tarpit? */ Five!
16:56:33  Interesting article. Says much more about Temkin than yayimhere though.
17:34:38 -!- impomatic has joined.
17:35:06  korvo: I learned quite a bit about both of them, I think
17:35:27  although I'm already fairly familiar with Temkin's style
17:36:13  interestingly, you can view both sides of the interview as being an exercise in extracting unintended/unintentional meaning from things
17:36:34  (which is not necessarily a bad exercise! it's an entirely valid source of new ideas)
17:37:06 -!- Yayimhere has joined.
17:37:09  Hello!
17:37:14  hello
17:37:17  how are you all doing?
17:37:18  we're discussing your interview
17:37:21  oh
17:37:23  wow
17:37:26  what a surprise
17:37:29  my email client notifier actually worked
17:37:34  great!
17:38:04  I sympathise with the point of view of taking one aspect of something and really focusing on it to see how far you can get
17:38:33  in one of my own esoteric.codes interviews, I mentioned how some of my languages were ideas extracted from a bigger, unfinished language
17:38:51  oh which one?
17:39:20  I didn't notice that in the 2017 one, if it's that one
17:39:51  the second one, in the section talking about three star programmer
17:39:57  but it was just a mention rather than the main point of the section
17:40:03  yea
17:40:08  makes sense
17:40:28  err, it's in the *second* section talking about three star programmer, sorry, I missed that there were two of them
17:40:46  its ok
17:42:43  what did you think of the interview?
17:45:07  it gave me a lot of insight into your languages
17:45:29  I was thinking that your languages often contain interesting ideas that weren't well-explained, and realised that I often have problems explaining my own ideas too
17:45:45  often I can't get my point across even despite having had a lot of practice
17:45:48  ideas are a strange thing
17:46:36  and can be hard to describe, as you said
17:47:21  sometimes I have problems describing my ideas even to myself
17:48:05  oh, thats interesting
17:49:10  I feel like it sometimes takes months to shape an idea into a space where I can understand/describe it properly (although this has mostly been happening with non-esoprogramming ideas recently)
17:51:11  I guess the sort of esolangs I like have fewer moving pieces to interact with than practical languages do, so there are fewer interactions that need to be explored
17:51:31  yea that makes sense
17:54:17  are there any other thoughts you have on the interview
17:56:15  I worry that Temkin is sliding into the Lex Fridman style of interviewing. It didn't really feel like he was doing anything investigative.
17:57:14  hm, interesting
17:57:54  Yayimhere: I was interested that you were interested in An Odd Rewriting System, I didn't expect it to be high up the list of my languages that other people liked
17:58:31  but I guess it's connected with the way it was made: I noticed a common aspect of esolang ideas I had that made programs hard to write
17:58:40  so I wanted to write an esolang about that one exact problem, to really focus on it
17:59:00  ais523: oh. I had actually thought it was pretty high up the list. What's interesting to me is, as you said, the concept
17:59:03  [[User:Aadenboy/Countable]]  https://esolangs.org/w/index.php?diff=172293&oldid=172289 * Aadenboy * (+99) /* Memory */
17:59:11  and so a reaction of not really understanding it is connected to that – I didn't really understand the problem either, so I wrote an esolang
17:59:25  and found one solution but it might not be the best solution (it would be interesting in a way if it is, but I suspect it isn't)
17:59:56  it was also just one of the first languages of yours I came across
18:00:52  now I'm wondering which of my other esolangs were created to highlight a problem – is it just Feed the Chaos? I can't think of any others offhand
18:01:15  I cant either
18:01:23  Globe?
18:01:25  perhaps
18:01:36  korvo: I'm not sure that interviews necessarily have to be investigative – giving the interviewee space to talk about what they want to talk about is often enough
18:01:47  i agree
18:02:05  although maybe the ideal is to present new points of view for the interviewee to think about
18:02:17  Globe is more an exploration of a solution than an exploration of a problem, I think
18:02:26  ais523: If it's not a conversation then what's the point of the second person?
18:02:30  but I guess that's a way of highlighting a problem in its own right
18:02:38  korvo: at least in this case, visibility
18:02:41  I think he certainly did bring me ideas I hadn't thought about before
18:02:53  ais523: Oh! Then *the entire enterprise* is wrong and backwards.
18:03:45  but it also helps the interviewee to organise their thoughts
18:03:51  The *entire problem* with Lex Fridman and similar interviewers is that they provide a platform without any insight or nuance. Fridman uploads 2hrs of their guest ranting, punctuated every 30min by a fresh one-sentence question.
18:05:03  I'm reminded of the debates in which election candidates try to convince people to vote for them: those technically have an interviewer but their role is intentionally minimal, only there to set topics (and occasionally to do fact checks)
18:05:36  Temkin at least certainly sparked some ideas I hadn't thought about before
18:06:26  there is a long-running program in the UK called "question time" where they invite a member of all the major political parties, and sometimes a celebrity or two with unusual political views, and ask them questions which are basically there to set a topic on which the panel expresses their own viewpoint
18:06:39  ais523: I guess that I think that organizing thoughts is something a person can prompt themselves to do. When I was in high school, as part of debate and speech, I was taught to interrogate my own position. These sorts of self-questioning setups are, at least to me, a necessary part of writing blog posts.
18:07:01  and this is valuable because it serves as a pretty reliable way to understand the views of the people that you're voting for
18:07:08  and what other possible views might exist
18:07:38  We can't do that in the USA because our political candidates are too stupid and the First Amendment ensures that we can insult them for it. The UK has trouble admitting that their king is unelected; meanwhile in the USA we famously disqualified a man from office because he could not spell "potato".
18:08:04  i have to go now but I will be back
18:08:17  Yayimhere: I'm glad that it was a good experience for you. I invite you to blog more often and explain your work.
18:08:45 * korvo &
18:09:49  the King simultaneously has, even in theory, both a very large amount of political power and almost no political power: he has some very wide-ranging abilities but isn't supposed to use them except on the advice of the government, which effectively makes them the government's powers
18:10:02  (and the "isn't supposed to" is actually officially documented somewhere)
18:11:11  so in practice the role turns into "person who officially interprets what the government's intention is"
18:11:26  it is not obvious that this needs to be an elected role
18:11:38 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
18:34:33 -!- tromp has joined.
18:49:26  [[User:Aadenboy/Countable]]  https://esolangs.org/w/index.php?diff=172294&oldid=172293 * Aadenboy * (-240) remove unnecessary command
18:53:25  korvo: i dont really have anywhere to blog about my languages
18:55:26  but I do see the gain in doing it
19:01:12  hey, ais523, are you still trying to prove Annihilator's computational class? or are other things occupying your mind
19:01:23  I haven't looked at that problem in a while
19:01:34  normally, if I don't make progress on a computational class issue for a while, I just give up until I have new ideas
19:02:01  k
19:02:11  i was just wondering since I was reading the page
19:04:45 -!- Yayimhere has quit (Quit: Ping timeout (120 seconds)).
19:05:42 -!- Yayimhere has joined.
19:06:45  [[User talk:Aadenboy/Countable]] N https://esolangs.org/w/index.php?oldid=172295 * Yayimhere2(school) * (+164) Created page with "I like where this language is going! Keep at it --~~~~"
19:29:13 -!- Yayimhere has quit (Quit: Client closed).
19:32:50  I could have sworn that I'd told them about Neocities and a few other options. I can understand the psychological desire to put barriers in front of ultimately-undesirable goals, though.
19:33:59  writing a blog is one thing, getting people to read it is another
19:34:26  I'm fairly well-connected in that respect, but even so I don't think all that many people read my blog (it is hard to tell because many of the requests to it will be from AI scrapers)
19:36:22  Sometimes the point of the blog is not so that people proactively read it, but so that you can retroactively hand them an article when they are loudly wrong.
19:36:40  Other times it's cathartic to get a short story or essay out of the mind and onto the page.
19:36:42 -!- Lord_of_Life_ has joined.
19:37:31 -!- Lord_of_Life has quit (Ping timeout: 264 seconds).
19:39:32 -!- Lord_of_Life_ has changed nick to Lord_of_Life.
20:03:30  [[User talk:Aadenboy/Countable]]  https://esolangs.org/w/index.php?diff=172296&oldid=172295 * Aadenboy * (+287) thanks!
20:05:52  [[Livefish]]  https://esolangs.org/w/index.php?diff=172297&oldid=146676 * DockedChutoy * (+5) fix
20:21:49  it's weird to think that anything you write on a blog will now become some company's AI's training data
20:22:06  with so many AI scraping bots seeking content
20:23:18  Why is that weird? The law on it was settled two decades ago and the practice was standardized three decades ago.
20:23:22 -!- somefan has joined.
20:24:26  It is interesting how training-time LLMs are now an audience worth considering. People have historically not appreciated my blog posts because they don't really like my POV, but an LLM doesn't care and may even learn something by reading. (Humans famously don't read much of what they claim to read, you see.)
20:25:33  it's "weird" because part of my brain becomes part of some AI's brain by "learning" my thought patterns
20:25:39  More important is that people not upload stuff to GitHub if they aren't prepared to have their stuff used for Copilot training. For me, this is largely *funny* because most of my code is bespoke to the point where it's not useful for corporations; but also the few things that matter are uploaded elsewhere.
20:25:56  so you're "influencing" the AI. it's like a separate audience...
20:26:21  their AI could learn "bad" things from you
20:26:33  there's no supervision in this web scraping
20:26:45 -!- somefan has quit (Remote host closed the connection).
20:26:55 -!- somefan has joined.
20:27:08  at least some reputable companies use a selected, reviewed data set for training, not random ad-hoc internet stuff
20:27:59  [[Apraxia]]  https://esolangs.org/w/index.php?diff=172298&oldid=170908 * Yayimhere2(school) * (+42) /* Examples */
20:28:10  [[Apraxia]]  https://esolangs.org/w/index.php?diff=172299&oldid=172298 * Yayimhere2(school) * (+0) /* Examples */
20:28:33  [[Talk:Turing tarpit]]  https://esolangs.org/w/index.php?diff=172300&oldid=172292 *  * (+633) /* What is the limit to The Turing Tarpit? */
20:28:51  Well, they use Common Crawl, see commoncrawl.org for more details. It's random ad-hoc stuff that people are sharing with each other; maybe it's popular, maybe not. It's preferable to a high-Reddit diet like the one that induced glitch tokens in GPT-2.
20:30:11  [[Apraxia]]  https://esolangs.org/w/index.php?diff=172301&oldid=172299 * Yayimhere2(school) * (+215) /* Examples */
20:34:12  I've considered designing a server to detect various styles of scrapers and send them information that poisons the model in various ways, to make it possible to subsequently check to see who was doing the scraping by prompting them with the trigger phrase and seeing which ones return poisoned results
20:34:55  scoofy: Here's how I think of it: an LLM is a bag of sentences. When you reach into it with a given context, you can pull out any sentence in the training data which matches that context, as well as many similar sentences which might occur in future training data. The controls we have as writers are to put certain sentences out there and hope that they get into the bag, or to withhold certain sentences from the public to make them less likely.
20:35:09  I put many more words in the comments here: https://awful.systems/post/5211510
20:36:42  ais523: https://iocaine.madhouse-project.org/ is what many folks are using. You can use awful.systems as an example domain; in some search engines like DDG/Bing there are still good results, but Google no longer returns useful results from that domain at all.
20:36:57  ais523: for scrapers, randomly replace some words in your text with the N-word
20:37:21  scoofy: 4chan already appears in training data~
20:37:58  korvo: I'm not sure how good Iocaine is at actually poisoning training data, as opposed to merely being useless – perhaps it's pretty effective though (it is named after a fictional poison, after all)
20:38:29  Good Night *
20:39:04  Peace.
20:39:42  ais523: It has to be manually filtered by humans. It's not as useless as Glaze or Nightshade, for which there are automatic tagging-and-cleaning pipelines!
20:40:51  korvo: ah, I was more thinking about "assuming it isn't filtered, will it have a substantial impact on the LLM's output"?
20:41:30  but it's based on markov chains, which will end up generating fragments of plausible sentences very often because that's what they do, so maybe a query to the LLM will match something that randomly appeared in a Markov chain and the LLM will think that the rest of the Markov chain is a good continuation
20:41:46  a.k.a. hallucination
20:42:27  LLM hallucinations are pretty different from that, they normally consist of statements that assume that a pattern continues, when the pattern doesn't actually exist
20:42:38  ais523: In general, low-perplexity text doesn't appear to harm training. There's a paper with a title like "Textbooks are all you need" which shows that the most important training data is high-perplexity textbooks.
20:42:41  or does exist but not in that context
20:43:22  If you train the LLM *only* on low-perplexity inputs then there is a ceiling to the learned complexity. (...Phrased like that, maybe it could even be a theorem of the PAC framework?) That's the so-called "model collapse" that folks sometimes discuss.
20:43:48  I was thinking more like, making the pages you serve to scrapers all contain a particular false sentence, and then see whose LLMs end up believing the sentence
20:43:58  especially if it's something that's harmless and plausible, but wrong
20:44:03  sort-of like trap streets on maps
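A minimal sketch of the kind of Markov-chain text generator that poisoning setups like the one discussed above are usually built on (the order-1 word chain and the idea of splicing in a trigger sentence are illustrative assumptions, not how Iocaine itself is implemented):

    # Sketch only: an order-1 word-level Markov chain, the usual basis for
    # low-effort scraper-poisoning text; a fixed trigger sentence could be
    # spliced into the output.  Corpus handling is an illustrative assumption.
    import random
    from collections import defaultdict

    def build_chain(corpus):
        chain = defaultdict(list)
        words = corpus.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
        return chain

    def generate(chain, start, length=40):
        word, out = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)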
20:44:40  scoofy: A confabulation, or what folks call "hallucination", is due to the fact that natlangs all contain words for dualizing/polarizing/inverting a concept: hot and cold are a good example from biology and physics.
20:45:47  So you get — cannot stress enough that this is what they really call it — "Waluigi paths", which are relatively likely paths that can get an LLM to completely flip its polarity with regard to a concept under discussion. This is broadly called the "Waluigi effect".
20:46:33  maybe comes from the fact that training data has polarity shifts in comparisons
20:46:47  For me, a much neater explanation is that an LLM *must* confabulate sometimes because it's a finite pile of weights trying to model a nearly-infinite world; there's no way that every true fact (and *only* true facts, defying Tarski and Gödel somehow) can fit into only a few GiB.
20:47:15  scoofy: It's because the highest-probability answers to any yes/no question are "yes" and "no", polar opposites.
20:48:44  korvo: the "ideal" for an AI model with an LLM-like interface would be for it to be a lossily compressed collection of a statement of facts, with the lossiness not mattering in practice (or causing the AI to say that it didn't know)
20:48:52  There's also epistemological hurdles; https://lobste.rs/s/yykymj/hallucinations_are_inevitable_can_be#c_aexu7v covers those and links to more.
20:48:58  I think it's theoretically plausible that one of those could exist in a few GiB – although I also think that LLMs are not that
20:49:18  stochastic parrots
20:49:48  ais523: Yeah. We know that, regardless of architecture, it's not possible for any finite pile of facts to generate only the true facts about natural numbers; that's just Tarski's Undefinability. So even this sort of ideal model is still just a compressed Wikipedia.
20:50:25  [[Index php]] N https://esolangs.org/w/index.php?oldid=172302 *  * (+583) Created page with "'''Index php''' is an esolang made by [[User:]]. == What and why == Index php is a random idea  had in mind (kinda) based in a [[Minsky machine]]. It's because  had no idea what to do. Also, it is his second esolang in 2026! == Commands, i guess == * {{cd|ADD [X] [Y]}}: Adds 
20:50:33  scoofy: Yes, but you have to actually read Gebru et al for the nuance. A parrot doesn't just emit one token; they emit a *path* of tokens. It's the same bag-of-sentences model I mentioned earlier!
20:50:44  that's eventually what anything gravitates to, when it's based on web scraping: a compressed Wikipedia + Reddit + Quora
20:50:57  now I'm wondering what an LLM trained on only Reddit would look like
20:51:13  [[User:/esolangs]]  https://esolangs.org/w/index.php?diff=172303&oldid=171964 *  * (+15) 
20:51:25  I don't even know what proportion of Reddit is serious discussion and what proportion is shitposting and memes
20:51:30  llama from meta spits stuff quoting from reddit
20:51:38  but would expect the LLM to roughly match it
20:51:41  their scrapers definitely seen reddit
20:51:49  oh yes, but they have other sources too
20:51:54  at least one version
20:52:03  quotes quora as well
20:52:06  and other references
20:52:20  so in the end... those kind of LLMs tend to be Internet.zip
20:52:20  ais523: GPT-2 and GPT-3 are good hints. A now-classic explanation of the "SolidGoldMagikarp" meme exists at (sigh) LW: https://www.lesswrong.com/posts/aPeJE8bSo6rAFoLqg/solidgoldmagikarp-plus-prompt-generation
20:53:13  spits out something in a second, that you could have found with 1 min of googling
20:53:35  scoofy: They have to be! All general compression techniques have universal properties in common. Shalizi's got a great explanation: http://bactra.org/notebooks/nn-attention-and-transformers.html
20:53:40  while boiling some cooling water in some giant AI data centre
20:53:58  stockpiling on gazillion zigabytes of RAM, pumping up memory prices
20:54:00  Oh whoops, you specifically want http://bactra.org/notebooks/nn-attention-and-transformers.html#gllz
20:55:16  scoofy: Not to defend the bubble, but data centers don't boil water. They buy standard drinking water *at market rates* and spray it into the incoming air for air conditioning. Most of it evaporates off. This is why they're so often located near rivers; they get cheap power from dams and cheap water too.
20:56:15  Like, if you want to be angry about water usage, attack golf courses. 
20:56:22  because they need those yottabytes of RAM to store Internet.zip for their AI agents
20:57:09  that's why it's so fast: the (extracted) weights are already in memory, i.e. fast access
20:57:26  OpenAI's buying RAM because they want to own their own data center. *Anybody* who makes data centers needs to buy RAM. Check the secondhand RAM market if you want affordable RAM; I bought an old 150 GiB Dell workstation for $150, for example.
20:57:34  so the more memory they have... the faster they can process
20:58:23  yea, but to run AI you need like... a lot of RAM, compared to your average application
20:58:40  when checking how to run these models locally, some require quite a lot of RAM
20:59:34  No. To run *LLMs* you need a fair amount of RAM. And, actually, you can get by with only having the model state in RAM and the model weights on disk! Inference only requires a few MiB of RAM.
20:59:47  This is why they are "large".
21:00:23  Traditional AI schemes usually are less than a MiB. They had to be! We've been doing image classification since the 1960s. We've been doing speech synthesis since the 1970s.
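A minimal sketch of the "weights on disk, state in RAM" point made above: memory-map the weight file so only the rows that inference actually touches get paged in. The file name and the 50000 x 4096 shape are illustrative assumptions.

    # Sketch only: keep a large weight matrix on disk and let the OS page
    # in just the rows that are read.  File name and shape are assumptions.
    import numpy as np

    weights = np.memmap("weights.f32", dtype=np.float32,
                        mode="r", shape=(50000, 4096))

    def embedding_row(token_id):
        # Only this row is pulled from disk; resident RAM stays small.
        return np.array(weights[token_id])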
21:01:39  well, everything could be cached from disk, of course
21:01:47  probably they don't do that for performance
21:01:57  ais523: Ugh, wrong link, sorry. You want Part 3, where they discover the habits of certain Redditors: https://www.lesswrong.com/posts/8viQEp8KBg2QSW4Yc/solidgoldmagikarp-iii-glitch-token-archaeology
21:02:32  scoofy: Yep. I'm doing experiments on that 150GiB machine. I also build stuff like the Linux kernel, systemd, and Firefox, which can't be built on their target machines either. A *lot of things* need high-RAM machines to build!
21:02:45  [[PMPL]]  https://esolangs.org/w/index.php?diff=172304&oldid=172254 * A() * (-157) /* Calculator */
21:04:34  At any rate, OpenAI's products need RAM *on the GPU board* so that the GPU can quickly access it, and that is *not* in competition with consumer RAM markets. What actually happened: Micron's winding down their Crucial consumer brand and this is raising prices because there's now less competition.
21:04:43  [[PMPL]]  https://esolangs.org/w/index.php?diff=172305&oldid=172304 * A() * (+18) 
21:05:31  [[Special:Log/move]] move_redir  *  *  moved [[EmojiStack]] to [[Mojifunge]] over redirect
21:05:31  [[Special:Log/delete]] delete_redir  *  *   deleted redirect [[Mojifunge]] by overwriting: Deleted to make way for move from "[[EmojiStack]]"
21:06:23  [[EmojiStack]]  https://esolangs.org/w/index.php?diff=172308&oldid=172307 *  * (+2162) Removed redirect to [[Mojifunge]]
21:10:31  korvo: right, the problem is that less memory is being produced because one major manufacturer's entire memory-production capacity was bought up
21:11:16  ais523: Well, it's been *allocated*. It hasn't actually been *paid for*. Big difference.
21:11:47  I was hoping that Micron would ensure that this was being paid for in advance, especially as I expect some of their few remaining customers to go bankrupt at some point (maybe soon)
21:12:18  I'm more wondering about what's going to happen to all this neural-network hardware when the bubble bursts
21:12:25  But yeah, the secondhand market hasn't seen a shift. I don't have a problem with people insisting on fresh DIMMs, but it's just like with new cars: you're paying that premium because it's new and you lose 20% of the value the moment it's driven off the lot.
21:13:11  it doesn't possibly make sense for it to be needed in this high quantity *even if* LLMs turn out to be successful and long-lived, people will work out the largest model they need and use just the resources on it that are needed
21:13:54  at least for me, with respect to memory and data storage, second-hand doesn't make sense because by the time people stop using the memory/storage it is normally exponentially smaller than things that are cheaply currently available
21:13:55  Well, it's not neural-network hardware. It's matrix-multiplying hardware. Maybe some more specialized groups like Coral will have trouble selling their TPU-on-a-stick, but Google's TPU business has only grown with time.
21:14:35  specifically dense matrices, right? spare matrices need different algorithms, so that reduces the use cases somewhat
21:14:47  * sparse matrices
21:14:57  If you're thinking of nVidia, rumor is that the GPUs they're selling to Microsoft, Google, Oracle, and Coreweave aren't really suitable for GPGPU workloads. They're more like Bitcoin-mining ASICs; they *could* be reused but they're somewhat specialized and have shorter projected lifespans.
21:15:06  and the number format may not generalise well either, neural networks often use very low-precision numbers
21:15:20  Yeah, dense matrices. Like, Coral was originally targeting image-classification workloads IIRC.
21:18:38  in any case, my opinions/predictions about the future of technology are "LLMs are a dead end that will never be substantially more useful than they are today (where their usefulness is somewhat limited), but neural networks / machine learning in general are useful and probably underutilised"
21:19:22  even so, I'm concerned about the quantity of fast neural-network hardware, because most plausible applications for them don't need to be at that kind of scale (even LLMs almost certainly don't need to be – for most of the tasks at which LLMs are good, smaller language models would also be good)
21:20:20  I would point out something new in every era of language modeling, going all the way back to Markov. LLMs have given us the ability to compare sentences for semantic similarity, and more generally to embed sentences into a vector space over floats; it's not nothing!
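A minimal sketch of the sentence-similarity point: once sentences are embedded as float vectors, semantic similarity is usually just cosine similarity between them. The embed() call is a hypothetical placeholder for whatever model produces the vectors, not a specific library's API.

    # Sketch only: cosine similarity between two embedding vectors.
    # embed() is a hypothetical placeholder, not a real library call.
    import numpy as np

    def cosine_similarity(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    # Hypothetical usage: cosine_similarity(embed("a cat"), embed("a kitten"))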
21:20:55  r/counting breaking ChatGPT's token inference is both amazing, and extremely plausible – the entire subreddit is almost entirely based on comment volume
21:21:20  (IIRC I intentionally contributed exactly 1 number to that count – but I don't use Reddit nowadays)
21:22:54  if the state of the art becomes something other than attention-transformers, will people come up with a new term or will they keep calling everything "LLM" and obfuscating the difference?
21:23:21  I think that the main problem with LLMs is that the products based on them are making horribly false and misleading claims. More generally, the project of robotics/AI is to create artificial laborers without rights, which we should reject on multiple moral grounds.
21:27:05  thoroughly uninterested in arguments about attention-transformers that rely _solely_ on the consequences of being a finite system, unless the speaker is trying to argue for duality and/or biological hypercomputation
21:28:37  kind of reminds me of the "we know a bunch of strategies to prove P!=NP that cannot possibly work because they relativize, in some sense, and P=NP under some oracles"
21:29:58  that P≠NP situation is one of those results that makes the situation so much harder to resolve – we have a proof that entire classes of P≠NP proofs cannot possibly work, but it is not powerful enough to prove that P=NP even nonconstructively
21:30:14 -!- tromp has quit (Quit: My iMac has gone to sleep. ZZZzzz…).
21:30:50  it's not possible for P=?NP to be proven formally undecidable, right? because doing so would prove that no polynomial-time algorithm for solving NP-complete problems exists, and thus that P≠NP
21:30:59  sorear: Personally I'm pretty sensitive to the difference because I study Mamba, RWKV, and other recurrent/transformers hybrids. I think that it's nice to have a basis for confabulation that makes oracles and genies impossible, even though I know that it won't convince everybody.
21:31:00  and so if P=?NP is formally undecidable, there will be no way to prove it
21:32:16  fwiw, Hofstadter (when discussing the Church-Turing thesis) brings up the possibility of biological hypercomputation (not as something he believes in, but as something he wasn't sure he could rule out definitively)
21:32:51  it's possible that P=NP but no correct algorithm can be proven correct for all inputs
21:33:31  sigma_2
21:35:01  Whether P=NP is arithmetic, I thought? So it *has* an answer. Maybe we don't have a strong-enough number theory yet.
21:36:39  hmm, the case of an algorithm that is correct but cannot be proven so is interesting – I've seen similar situations before
21:37:18  in the case of P=NP, such algorithms would practically be very useful because they cannot give the *wrong* answer, only a correct answer or a "don't know" (because checking if a purported solution to an NP problem is correct is fast)
21:38:56  there's a fairly standard approach, iterate over a program index and a runtime and stop when one of the programs outputs a satisfying assignment
21:39:24  easily proven to be correct and, conditionally on P=NP, runs in time of some polynomial
21:40:37  I don't read Russian, but I gather that Levin's entire research programme worked that way. First, show that a brute search is complete and correct; second, show that it is optimal; third, show that it is NP-complete.
21:41:06  So P vs NP is purely about determining the runtime of those algorithms; it's a fine-structure question about the Polynomial Hierarchy.
21:41:16  sorear: oh right, because for any given problem in P, this only has to search through finitely many programs and the number of programs it has to search doesn't depend on the input
21:41:23  so the result is polynomial-time but with a terrible constant factor
21:41:38  and if P≠NP, this algorithm is still correct but it isn't polynomial-time
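A minimal sketch of the dovetailed search described above, with SAT as the NP problem. run_program() is a hypothetical placeholder that runs program number i for a bounded number of steps and returns a candidate assignment or None; because every candidate is verified, the procedure can only return correct answers (and simply never halts on unsatisfiable inputs).

    # Sketch only: enumerate (program index, step budget) pairs and accept
    # an output only if it verifies.  run_program() is a hypothetical
    # placeholder for the program enumeration.
    from itertools import count

    def satisfies(clauses, assignment):
        # Fast NP verification: every clause contains a true literal.
        # clauses: lists of nonzero ints; assignment: {var: bool}.
        return all(any(assignment.get(abs(lit), False) == (lit > 0)
                       for lit in clause)
                   for clause in clauses)

    def universal_search(clauses, run_program):
        for budget in count(1):
            for i in range(budget):              # dovetail over programs 0..budget-1
                candidate = run_program(i, clauses, steps=budget)
                if candidate is not None and satisfies(clauses, candidate):
                    return candidate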
21:44:05  the problem with "prove brute force search is optimal" is that it generally _isn't_, the exponential time hypothesis is a subtler statement than that
21:46:27  yes, we know an algorithm that solves NP problems in polynomial time if P=NP, but this is useless for two reasons: one is that even though it's polynomial time it's quite slow for our hardware, the other is that most likely P≠NP
21:47:56  sorear: Right, and even folks like myself who are skeptical of SETH are still usually willing to concede ETH. AIUI we have no evidence against ETH, and instead we have stuff like phase transitions in k-SAT.
21:48:13  not only is it polynomial time, it also gives you the lowest possible polynomial degree
21:48:47  I was planning to do something similar in a golfing language – run all possible evaluation orders in parallel
21:49:32  because often when you're writing a golfed program, some of the evaluation orders terminate and others go into infinite loops, so this would guarantee that the program would terminate if there was any evaluation order for which it terminated
21:49:48  (in code golf, correctly behaving programs are almost always expected to terminate)
21:51:32  Tangent: Nobody has implemented [[Pola]] yet. If you can implement any NPC problem in Pola then P=NP. I'd expect any true believers in P=NP to jump at this opportunity to do some descriptive complexity theory with witnessing programs.
21:52:36  fwiw, I think I assign a higher probability to the possibility that P=NP than most computer scientists do (although it's still fairly low)
21:52:51  there are so many cases where things turned out to have a lower complexity than expected
21:54:53  I'm one of those Bayesian freaks, and my prior is a composite of several surveys; I'm 99% sure that we are in either Minicrypt or Cryptomania based on empirical evidence. This is a relatively weak belief, so I could be moved by evidence, but it's above the magic threshold of 7/8, so I hold it.
21:56:56  oh yes, my beliefs about P=NP are fairly weak and could easily be moved by evidence – but on the other hand, I'm not expecting substantial new evidence any time soon
21:57:37  Tangent to LLMs: My P(doom from AI) is too small to numerically estimate. It's dominated by e.g. P(doom from nuclear apocalypse), which is like 0.5%. I think people panic too much about black-swan events while ignoring the underlying patterns and implied required maintenance of societal infrastructure.
21:57:37  so I have a complexity question. your input is a Catan board (hex grid) of unlimited size and the information of which edges have a road by the yellow player. The number of roads is also unlimited, unlike in real Catan and its extensions. Is there a polynomial time algorithm to find the longest path of roads that doesn't reuse any road? 
21:57:39  [[User:RaiseAfloppaFan3925]] M https://esolangs.org/w/index.php?diff=172309&oldid=172161 * RaiseAfloppaFan3925 * (-133) 
21:57:45  I was brought up to believe in Bayesianism, although that mostly just left me questioning it a bit
21:59:15  this is an easy problem in real Catan, which is limited to 14 road pieces per player, and I think even in all Catan expansions (though I don't actually know most of them); heuristics work well enough for those small inputs. But I don't know a general polynomial-time algorithm, nor can I prove that it's NP-hard
21:59:16  korvo: the way I see it is that a) the theoretical risk from a sufficiently smart AI is very large, but the odds that such an AI actually exists or could be created short term is very small; b) in addition to risks from underestimating AIs, there are also risks from overestimating AIs, and those could potentially be much larger (but the odds of them being apocalyptic are quite low)
21:59:54  I live in a long-term earthquake zone (Cascadia Subduction Zone) and so I need something like Bayesianism to manage the existential dread from the Floor of Damocles.
22:00:00  ais523: hehe, that's https://xkcd.com/552/
22:00:20  like, if a country assumes an LLM is smarter than humans and decides to put it in charge of the government as a consequence, that could have huge consequences if the LLM
22:01:05  * if the LLM isn't particularly smart
22:03:24  I'm going to call that "probably NP-complete" as a variant of the planar longest path problem but I haven't looked closely at that recently enough to know where the cutoff is
22:04:02  b_jonas: Does it have to be a usable road? Like, does it have to stretch from port to port?
22:04:14  korvo: no
22:05:02  sorear: "planar longest path"? I'll try to search for that, good idea
22:05:48 -!- pool has joined.
22:06:22  ais523: That's understandable. I think that your framing, which I've heard from other folks, is 100% reasonable. At the same time, there's a parallel framing where we talk about e.g. P(doom from pyramid scheme). We won't go extinct from a pyramid scheme, but in 1997 one nearly destroyed the government of Albania!
22:07:09  So should I say that P(doom from pyramid scheme) is high because government leaders are stupid enough to do it again, internationally, or low because pyramid schemes are obviously silly and we're all more reasonable people than that?
22:09:05  Statistically "giant meteor" is surprisingly high on the list of most likely things to kill any given person
22:09:39  b_jonas: I think that this is technically a "longest trail problem", where a trail is a path that doesn't reuse edges but can reuse vertices. Not sure how you feel about that. I am not sure whether it's NP-hard, but it probably is reducible to longest-path by putting some restriction on vertices.
22:10:38  sorear: Exactly! So should we talk of P(doom to me, personally, because of my personal choices) or P(doom to a country because its leader was influenced by something on their phone)?
22:10:47  catan roads are a cubic graph, you can't reuse a vertex except at the beginning or the end because that would require a degree >= 4
22:11:07  Or P(doom to somebody, somewhere, alive today)
22:11:54  subcubic
22:12:03  Hey, that'd work. So the longest trail can't be longer than the longest path + 2. Nice.
22:12:44  giant meteors are less worrying nowadays than they historically were because a) governments do actually check for them and would probably compete to be able to take credit for preventing doom from them, and b) the more devastating a meteor hit would be, the easier the meteor would be to spot and thus the earlier we could do something about it
22:13:34  korvo: on the hex grid these are almost the same, because every node has degree at most 3, so you can only reuse nodes at the ends of the path, so the length can differ by at most 2.
22:13:36  if we spot a meteor a long distance away, its trajectory only needs to be changed very slightly to prevent it colliding with Earth, so a comparatively small intervention would be sufficient
22:13:51  yeah, what sorear says
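A minimal sketch of a brute-force answer to b_jonas's question: search every trail (each edge used at most once, vertices may repeat) by depth-first search. This is exponential in the worst case, which is exactly why the polynomial-time question above is interesting; nothing about the hex grid is assumed, just an undirected edge list.

    # Sketch only: brute-force longest trail over an undirected edge list.
    # Exponential time, so only usable on small boards.
    def longest_trail(edges):
        adjacency = {}                 # vertex -> list of (edge index, neighbour)
        for i, (u, v) in enumerate(edges):
            adjacency.setdefault(u, []).append((i, v))
            adjacency.setdefault(v, []).append((i, u))

        def extend(vertex, used):
            best = 0
            for i, nxt in adjacency.get(vertex, []):
                if i not in used:
                    best = max(best, 1 + extend(nxt, used | {i}))
            return best

        return max((extend(v, frozenset()) for v in adjacency), default=0)

    # e.g. longest_trail([("a", "b"), ("b", "c"), ("c", "a")]) == 3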
22:14:37   So should I say that P(doom from pyramid scheme) is high because government leaders are stupid enough to do it again, internationally ← arguably, with LLMs, government leaders are actually doing that at the moment
22:14:40  I also like the astrological analogy because it turns out that there are more interstellar comets than we expected, so our P(ancient aliens) should actually have been much higher from a Fermi/Drake approach. But they don't get close to Earth either, so maybe there's a more universal P(things come close to Earth) that we can use as a generalization.
22:15:05  but I'm hoping that the damage will be confined to a somewhat suboptimal allocation of resources
22:15:22  Similarly, maybe there's a P(doom from leaders looking at phone) which is more general than P(doom from BTC prices) in El Salvador or P(doom from Stable Diffusion images) in USA.
22:18:39  ais523: Yeah, that's where I am right now too. The pattern of taking Softbank money, taking Saudi money, and finally hitting a wall is well-documented and inevitable at this point; there's simply not a trillion USD worth of spare wealth to turn into a trillion USD worth of silicon monocrystal.
22:19:45  So our grey-goo scenario ends in the same place as my yeast during the pre-pancake period: out of food, unable to expand, ready to be converted and eaten in turn.
22:20:07  I have seen a conjecture that at least some LLM providers are using a strategy of racing to become too big to fail before they actually fail
22:20:20  OpenAI, for sure.
22:21:52  At this point, OpenAI's actual failure is partially like Microsoft's, where self-cannibalization is inevitable due to stagnant monopoly, but also increasingly like Enron's, a staggering amount of book-cooking that destroyed one of the big international auditors in turn. Not sure if Enron's taught outside the USA.
22:22:25  at least in the UK, I think most people have heard of Enron and have a basic idea of what happened to them, but don't know the details – at least I'm in that situation
22:22:50  The dude that unwound Enron is currently in charge of unwinding FTX. I think it will be an amazing cap to a career if he's appointed to unwind OpenAI.
22:23:00  it is sometimes hard for me to know what situation the typical person would be in, due to not being a typical person myself – but I have to guess whether my atypicalities are relevant to the situation
22:23:41  Enron was a power utility in Texas. They cooked their books. Their auditor helped them cover up the books. That's really all that matters; it was a *big* fraud, mostly.
22:23:56  as someone directly affected I could probably know more about them than I do
22:25:17  now I'm wondering what the incentives are, for someone engaged in accounting fraud, to do it to a small extent rather than a large extent – it's well-established that for most crimes you want such incentives to exist, to discourage criminals from deciding to go all-in once they've decided to commit crimes in the first place
22:26:07  korvo: ok, I think you're right, so apparently it's NP-complete to determine if a planar graph has a hamiltonian circuit, which is an old result from the 70s but I hadn't recalled it, and I think you can do a polynomial reduction from this to the catan longest road problem, so that one is NP-complete too.
22:26:14  there's the "here's one of the great natural language training datasets" angle, the "here's the blood that CAISO's market rules were written in" angle, and the "corporate fraud" angle, the last of which I know the least about
22:27:43  b_jonas: I was just about to reply! So I think I've informally sketched that it's NP-complete. The missing piece is how to ensure that there aren't any trails which are longer than the longest path but built from the *second-longest* path. I think that we can do a poly-time reduction: when doing the NP-complete longest-path search, we can find all longest paths for free, so let's just find all of the paths within length 2 of the longest path.
22:28:33  I think there can only be poly many such paths, so just iterate through them and make all of the longer trails; there's only six possibilities per path, right? So that's a poly-time reduction.
22:28:34  the annoying part will be length-matching the embeddings of edges
22:30:35  i think that a lot of assumptions about how, specifically, AI doom plays out were established in the 20th century and became entrenched with less actual information than we have now
22:32:52  Yep. Offering professional services, I'm constantly bumping against the Computer Fraud & Abuse Act (CFAA), which the USA established as a response to a film called "Wargames" which is basically about a teenager SSH'ing into the Pentagon and launching nukes.
22:33:38  (It's not a good film. If you want something from that era, "Hackers" is a standard recommendation. It's also dated but at least it's got better representation of actual hacking and social engineering.)
22:34:49  Similarly, almost all AI doom discussion devolves into referencing "Terminator", "Terminator 2", or "The Matrix". And it's all built upon Asimov and Dick and Heinlein and Bradbury, which in turn was built upon "Metropolis" and "Rossum's Universal Robots".
22:35:07  arguably a machine which can hallucinate on any subject _is_ an AGI as the term was originally defined, illustrating the limitations of the concept
22:36:24  fwiw, I've considered for a while that corporations are, in effect, artificial general intelligences
22:36:40  they are powered by human thought, which makes them not count from many people's points of view
22:36:59  but sometimes they can act as though they hold opinions that don't match those of any of the people present, and they can certainly take actions that reasonable humans likely wouldn't take
22:37:00  At a former employer, P(doom from AI) was not a serious topic, but P(person is killed by cobot) was a real thing we discussed. I'm told that P(person is killed by high-speed swinging arm) is a real thing too, although fortunately I've not had one of *those* jobs. Yet.
22:37:52  you can have a lot of people communicating and form an emergent system out of them, and not really have much of an idea of how the system as a whole will behave
22:38:00  ais523: Charlie Stross, myself, and a few other Awful Systems regulars have discussed this many times. The consensus is that selling shares was the tipping point; the East Indies Trade Company was the first paperclip-maximizer.
22:38:03  …perhaps this would be an interesting esolang idea
22:38:30  Similarly, we locate the Singularity sometime in the past. Stross puts it near the beginning of the Industrial Revolution IIRC; I put it in the 1910s or so, near quantum mechanics.
22:40:35  if by "high-speed swinging arm" you mean "sailboat boom"
22:43:24  sorear: Oh wow. I kind of love that? I was thinking of the welding and assembling arms in a car factory but now I'm also thinking of big looms. I see robotics, cybernetics, and AI as the same thing; I'd be willing to think of it as stretching further back, too.
22:43:37  [[Language list]] M https://esolangs.org/w/index.php?diff=172310&oldid=172236 * Buckets * (+12) /* P */
22:44:05  [[User:Buckets]] M https://esolangs.org/w/index.php?diff=172311&oldid=172237 * Buckets * (+11) 
22:44:23  [[Phurb]] N https://esolangs.org/w/index.php?oldid=172312 * Buckets * (+782) Created page with "Phurb is an Esoteric Programming language created By [[User:Buckets]] in 2020. ] {| class="wikitable" |- ! Commands !! Instructions |- | "" || Print What is within The Quotes, their representations. |- | m = || Represent the Variable m as whatever US On the Other side Of t
22:46:50  i spent a week on a ship with the boom at head height, did not get paid
22:47:27  Terrifying and frustrating.
22:48:57  b_jonas: Oh! I'm sorry, 3 × 3 = 9 possible paths, not 6.
23:01:55  [[Talk:Turing tarpit]]  https://esolangs.org/w/index.php?diff=172313&oldid=172300 * Corbin * (+487) /* What is the limit to The Turing Tarpit? */ Machine or language?
23:05:09 -!- b_jonas has quit (Ping timeout: 252 seconds).
23:11:40 -!- Sgeo has joined.
23:13:48  korvo: I've seen you claim a few times that a programming language must be a language over an alphabet – I'm not sure I agree
23:15:14  although it maybe comes down to whether or not there's a distinction between a program and a description of a program
23:15:42  if a program isn't represented as a string of symbols, we have to convert it to one in order to be able to describe it to a computer in order to have it executed
23:24:55  ais523: I'm saying it for the benefit of the youngsters, to force them to clarify their thinking. I'd hope that my structuring of the page, so that we have many different kinds of computational systems and different metrics for each of them, is open enough to accommodate more non-languages.
23:27:02  Here, the clarity is in realizing that a BF machine must have eight opcodes, but a BF monoid might have smaller rank.
23:31:44 -!- b_jonas has joined.
23:33:04  korvo: such as the classic wrapping-BF technique of implementing a - as 255 +s
23:33:48  Yeah, that works.
23:56:00  or even without wrapping, you can have -< as a single builtin instead of - and < separately, I think that's a well-known trick
23:58:07  then rewrite < to + -< and rewrite - to -< > and add an extra > to the start of the program so you don't fall off the start of the tape
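A minimal sketch of the rewrite just described, spelling the fused -< instruction as D (the spelling is only an illustrative assumption; any single token would do):

    # Sketch only: translate 8-instruction BF into a 7-instruction dialect
    # where "D" stands for the fused "-<" (decrement, then move left).
    def to_seven(bf):
        out = [">"]             # spare cell so the "D>" expansion of "-" can't fall off the left edge
        for c in bf:
            if c == "<":
                out.append("+D")    # + then -<  : net move left
            elif c == "-":
                out.append("D>")    # -< then >  : net decrement
            elif c in "+>[],.":
                out.append(c)       # all other commands are unchanged
        return "".join(out)

    # e.g. to_seven("+-<>") == ">+D>+D>"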
23:58:24  the whole BF minimization page is full of people starting like this and then going off the rails
23:59:12  https://esolangs.org/wiki/Simple_translation is an attempt to make sense of the mess
23:59:27  and might correspond to korvo's concept of language rank
23:59:45  And [[monoid]] is an attempt to make sense of [[simple translation]], since so much of that is actually unproven and imprecise.