
/techbunker/ - Technology (Bunker)



File: e5e2459cd41149d⋯.jpg (14.23 KB,338x461,338:461,alan-kay-and-dynabook.jpg)

 No.1175

What are your views on him? Is he right or just an idealist who can't actually contribute anything to the real world?


 No.1176

Right about what? I only know him for his ideas about OOP, which are extremely different from the crap that ended up in C++ and Java, and pretty nice for large-scale designs, but which have a horrible tendency towards unnecessary state at the small level.

It's hard to say much more with such a vague OP.


 No.1179

He's pretty red pilled, but his Smalltalk-based approaches aren't ideal. Obviously better than C, but dynamically typed languages are script kiddie shit.


 No.1180

>>1179

>dynamically typed languages are script kiddie shit

This definitely applies to almost all dynamically typed languages that are currently popular, where basic features (static type checking, first-class functions, fucking rational numbers) are missing purely because the language designer blubbed out, was lazy, or was just stupid. But Smalltalk and Common Lisp are exceptions, I think. There the dynamic typing actually gives you something in return: the ability to develop the program while it is running. For open-ended tasks that don't have a clear solution, this is worth its weight in gold, but it definitely hurts when e.g. implementing a protocol.


 No.1182

>where basic features (static type checking, first class functions, fucking rational numbers)

none of those are basic features. but i would agree typed languages are a much better design

>rational numbers

do you mean floating point or arbitrary precision? numbers shouldn't be part of the language, neither should text. but i'm talking about sane PL design which the industry isn't interested in and doesn't understand. all they care about is adding a new ad-hoc syntactic construct literally every week and "hurr durr lambdas are cool now, let's add them to Java and C"


 No.1183

>>1182

I can see how lack of type checking would be seen as a (mistaken) design decision instead, but the other two are absolutely basic features. Implementing them is piss easy when you have a garbage collector and speed isn't an issue, both of which apply to the average "scripting" language. Are there even any dynamically typed non-GC languages made in the last 30 years? The only things I could think of are relics that predate C.

>do you mean floating point or arbitrary precision?

I mean rational numbers.

< COMMON-LISP-USER> (+ 2/6 1/14)

< 17/42

Absolutely trivial to implement in a GC language* and has been done a thousand times. But providing basic features and infrastructure is much less fun than fucking around with pointless minor syntax changes and jerking yourself off about The Zen of Shitlang, and it turns out that most "language designers" are more interested in the latter. Programmers are probably also averse to actually learning something, seeing how every time you shit on a language people treat it like you insulted their dead grandma.
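To make "absolutely trivial" concrete, a minimal Haskell sketch (Rat, mkRat, and addRat are names I made up; a real library fleshes out the remaining operations the same way):

data Rat = Rat Integer Integer -- numerator, positive denominator

-- normalize with gcd so equal rationals get equal representations
mkRat :: Integer -> Integer -> Rat
mkRat _ 0 = error "zero denominator"
mkRat n d = Rat (signum d * div n g) (div (abs d) g)
  where g = gcd n d

addRat :: Rat -> Rat -> Rat
addRat (Rat a b) (Rat c d) = mkRat (a * d + c * b) (b * d)

-- addRat (mkRat 2 6) (mkRat 1 14) is 17/42, matching the Lisp session above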

This isn't intended as a slight against you, but the simple fact that you saw "rational number" and immediately considered floating-point numbers (or that I could have meant them) shows how endemic the problem is.

>numbers shouldn't be part of the language, neither should text.

The concept of "text" really is woefully underspecified, but you'll have to explain the numbers part. Surely you don't mean that arithmetic should be implemented in the style of Peano arithmetic through functions, right?

>i'm talking about sane PL design which the industry isn't interested in and doesn't understand

I think it's worse: It's not that they are uninterested, it's that they don't even realize there is a problem. And then buggy shitware and security breaches are "just a fact of life", probably the same way Ebola is "just a fact of life" when you regularly roll around in shit.

* It's not really that much more complicated to add them to a language based on fixnums, it's just that the inherent problems of fixnum arithmetic such as overflow are much harder to ignore when approximating rationals rather than integers. Somehow, it is easier to think "surely the number will never go THAT high" than to think "surely the denominator will never go THAT high". My guess is that through school, everyone has more experience with denominators "blowing up" during a calculation than with very large integers.


 No.1184

>>1183

I agree with what I think you're getting at: programs should have easy access to actual numbers (i.e., rationals; perhaps indeed all numbers should just be rational, with a 1 denominator when you want an integer, instead of having 2 different types, rational and natural), and this should be the default, instead of everything being based around machine words. It's a huge pain to analyze machine-word-based programs. 99% of the time it's obvious the programmer was oblivious to how things actually work, but then the code just doesn't break the way you want it to (when looking for vulns).

The moment you decide to use fixed-size words (or worse, C-style types that are fixed size but whose size changes from machine to machine), you have to ask:

- How big are the inputs?

- How big is the output?

- How does this expression with 3 different types of fixed-width integers behave?

- What happens in this obscure corner case where the number is 2^N-1 or -2^N?

- Okay, the above case will never happen, but can an attacker induce that state and use it to exploit the program?

And programmers don't understand this. They don't care about "small details". They don't care that the language is full of small details which they aren't interested in using correctly. Instead, like you point out, they take it as an offence that you said the language is bad for being this way. Or they make oblivious claims like "this user identifier has to be a machine word or we'll lose performance".
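To make the 2^N-1 / -2^N corner concrete, a quick GHCi session (Haskell's Int is exactly such a fixed-width machine type; in GHC both of these silently wrap rather than failing):

< ghci> (maxBound :: Int) + 1 == (minBound :: Int)
< True
< ghci> negate (minBound :: Int) == (minBound :: Int)
< True

The second one is the nastier trap: there is exactly one value whose negation doesn't fit, so negating it quietly hands back the same negative number.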

>The concept of "text" really is woefully underspecified,

most correct. and much more so than most people think

>Surely you don't mean that arithmetic should be implemented in the style of Peano arithmetic through functions, right?

Almost, but that's inefficient. Binary representation is acceptable. For example:

data Nat = One Nat | Zero Nat | Empty

Empty

One Empty

One (One Empty)

One (Zero (One Empty))

(although I prefer a slightly more complicated version that provides a canonical representation - e.g. one that doesn't allow stuff like Zero Empty and Zero (Zero Empty))

And yes, then we have to use functions for everything:

abc = add a (add b c)

def = add x (One (One Empty))

ghi = sub (add x y) (add (sub a b) (mul b c))

Which I don't see as a big deal. If anything it just points out that PLs suck and become unreadable after a tiny amount of nesting. I don't really think having infix notation is needed.
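For concreteness, addition on that representation is only a few lines (a sketch assuming the LSB-first reading, under which One (Zero (One Empty)) is 5; succ' is a helper name I made up):

add :: Nat -> Nat -> Nat
add Empty y = y
add x Empty = x
add (Zero x) (Zero y) = Zero (add x y)
add (Zero x) (One y) = One (add x y)
add (One x) (Zero y) = One (add x y)
add (One x) (One y) = Zero (succ' (add x y)) -- 1+1 = 0, carry 1

succ' :: Nat -> Nat
succ' Empty = One Empty
succ' (Zero x) = One x
succ' (One x) = Zero (succ' x)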

I don't want some giant piece of code like GMP together with more grammar and syntax as the implementation.

Numbers are overused anyway. Variant types can be used for almost all cases where numbers would have been used.
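For example (a made-up case): where typical code stores a weekday as an int 0-6 plus a comment explaining the encoding, a variant type carries the meaning itself and makes out-of-range values unrepresentable:

data Weekday = Mon | Tue | Wed | Thu | Fri | Sat | Sun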

>I think it's worse: It's not that they are uninterested, it's that they don't even realize there is a problem. And then buggy shitware and security breaches are "just a fact of life",

Exactly. People in clownworld who have no idea what they're talking about go around saying "security is hard (TM)" while churning out the same 3 vulns for 40 years.


 No.1185

>>1184

That is pretty much what I'm getting at, yes. Saner foundations make for much saner (and simpler) programs. Sadly, you can't get away from fixnums and overflow handling completely, but at least languages like Ada make the weirdness more explicit and allow you to kill some of it, like x -> -x not being defined for typical fixnums.

I don't really see the fundamental difference between your scheme and having (arbitrarily large) numbers in a language. The distinction seems syntactical, is that the point? I.e. arithmetic operations shouldn't be special but just behave like normal functions instead? Lisps do this for instance, but so does Haskell (whose notation you're using) so it feels like I misunderstand what you mean by numbers.


 No.1186

File: 208ce805f8db8ea⋯.png (46.85 KB,1045x440,19:8,go_trash.png)

>>1185

I don't see any need for fixnums at all in a new PL.

>I don't really see the fundamental difference between your scheme and having (arbitrarily large) numbers in a language. The distinction seems syntactical, is that the point?

I'm saying the language should not provide a built-in "integer", or any other numeric type. Nor syntax for it. Nor operations for it. Nor the ability to define operators. Why have it built in? What if we want matrix math? Should that be built in too? Complex numbers? Where do we draw the line? For operators, why not ternary operators? Or summations/products (it could be possible if languages weren't 1d text)? Since there is no line, mainstream PLs constantly add new unneeded stuff instead of providing more general mechanisms. All I see the need for is something like algebraic data types aka variant types. Every time I look at Go they've added all kinds of ad-hoc features based on use cases. And since all the use cases are webshit development (which also means using a unix-based os), Go continues to be good for webshit/unixshit and nothing else. So what? The web will be dead in 10 years and all such languages will go with it.

Literally just opened up a random Go changelog and:

> Per the number literal proposal, Go 1.13 supports a more uniform and modernized set of number literal prefixes.

>

> Binary integer literals: The prefix 0b or 0B indicates a binary integer literal such as 0b1011.

> Octal integer literals: The prefix 0o or 0O indicates an octal integer literal such as 0o660. The existing octal notation indicated by a leading 0 followed by octal digits remains valid.

> Hexadecimal floating point literals: The prefix 0x or 0X may now be used to express the mantissa of a floating-point number in hexadecimal format such as 0x1.0p-1021. A hexadecimal floating-point number must always have an exponent, written as the letter p or P followed by an exponent in decimal. The exponent scales the mantissa by 2 to the power of the exponent.

> Imaginary literals: The imaginary suffix i may now be used with any (binary, decimal, hexadecimal) integer or floating-point literal.

> Digit separators: The digits of any number literal may now be separated (grouped) using underscores, such as in 1_000_000, 0b_1010_0110, or 3.1415_9265. An underscore may appear between any two digits or the literal prefix and the first digit.

https://golang.org/doc/go1.13

Meanwhile, Python 3 removes octal literals vs Python 2. Great, now we have N languages and we have to remember whether a leading 0 matters in each of them (particularly when auditing for backdoors/vulns or even just plain bugs). Why the fuck do we even HAVE octals? Because UNIX uses them for file permissions or umask or some shit? I can't even remember the reason. But it's a use-case driven reason.


 No.1187

>>1186

>complains about how having to deal with different number syntaxes in languages makes auditing harder

>solution is to have every single program decide for itself what a number should be

This better be trolling.


 No.1188

>>1187

>complains about how having to deal with different number syntaxes in languages makes auditing harder

This example is mostly just bloat and time wasting, but actually yes, it does affect auditing as well, which clown world wouldn't know because they're too busy fixing 7th-order misconceptions like CSP and X.509. When designing a non-unix OS and PL from the ground up, why the fuck would I put in special syntax for not only hex and decimal, but also OCTALS? To waste everyone's fucking time?

>solution is to have every single program decide for itself what a number should be

No, the solution is for the default to be unbounded integers. It can be built into the language or not, it doesn't matter.
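(That's how Haskell's default Integer already behaves, for instance; there's no width to pick and no overflow to audit:)

< ghci> product [1..25]
< 15511210043330985984000000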

<thinks arithmetic is fundamental to programming or even practical programming (outside of legacy UNIX/web interfacing)

>This better be trolling.

This better be trolling.


 No.1190

>>1188

>No, the solution is for the default to be unbounded integers.

I agree with that but…

>It can be built into the language or not, it doesn't matter.

…you said the exact opposite one post ago.

>arithmetic isn't fundamental to programming

That's gotta be the hottest take on programming I've ever read. Find me a non-trivial program that doesn't need arithmetic somewhere.

>When designing a non-unix OS and PL from the ground up, why the fuck would I put special syntax for not only hex, decimal, but also OCTALS?

You wouldn't, there'd be a general mechanism for bases, like Common Lisp's for instance, and you wouldn't make the syntax misleading like the commonly used octal syntax. But Go isn't a fresh start (or even a good language).
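(Concretely, Common Lisp's general mechanism is the #nR reader syntax: #2r1111, #8r17, and #16rf all read as the same integer 15, and any radix from 2 to 36 works, with no special status for octal.)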


 No.1191

Oh yeah, to correct a mistake in a previous post: I forgot that Python 3 actually uses stuff like 0o10 for octal, as opposed to getting rid of it.

>>1190

>Find me a non-trivial program that doesn't need arithmetic somewhere.

Find me a non-trivial program that doesn't need JSON (or something with the same purpose) somewhere. Should there be built in JSON syntax? No, especially seeing as JSON is ill-specified.

>…you said the exact opposite one post ago.

Yes. For the sake of getting rid of crap semantics, the default type of number should not be machine words. That solves Problem A, which is the most obvious one.

Now Problem B is whether we want to bloat the language with some "official" number type or not.

Then if we head further into clown world, Problem C is which syntaxes should be supported?

Octal?

Decimal?

Hex?

Binary?

Thousands separators? e.g. 9_999_999, 1_234_567_891_123, etc. The latest fad in Haskell, Java, and now Go (I'm sure C#, JS, and whoever else have added it too by now while I wasn't paying attention)

In my school of thought that's just merging the UI into the language in a stupid way. (Same for comments. Same for names. Same for code layout.) Instead, why not have an editor that just displays the number literal however the user wants? From an auditor's perspective, we never care about how the developer wants us to view his shit anyway. We may be looking at some number written in the code as decimal, but want to see it in hex or binary, and would have copied it into a converter or calculated it in our head anyway.

Say we have an expression like this:

One (One (One (One Empty)))

It could be in the console:

<GetNumThings()

>One (One (One (One Empty)))

Or in the code editor:

>x = One (One (One (One Empty)))

Now let's say Alt+A is to cycle representations, and the selected expression is marked by [] (i guess that's how you'd tell the editor/console which expression you want to switch)

>y = (abc, [One (One (One (One Empty)))])

<Alt+A

>y = (abc, [0b1111])

<Alt+A

>y = (abc, [15])

<Alt+A

>y = (abc, [0o17])

<Alt+A

>y = (abc, [0xf])

<Alt+A

>y = (abc, [#13]) ← base 12

<Alt+A

>y = (abc, [One (One (One (One Empty)))])
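The cycling above is just a pure function from the value to a display string, which is the whole point. A sketch (toInteger' and render are hypothetical names; it assumes the LSB-first reading under which One (One (One (One Empty))) is 15):

import Numeric (showIntAtBase)
import Data.Char (intToDigit)

data Nat = One Nat | Zero Nat | Empty

toInteger' :: Nat -> Integer
toInteger' Empty = 0
toInteger' (Zero n) = 2 * toInteger' n
toInteger' (One n) = 1 + 2 * toInteger' n

-- the editor, not the language, picks the base (intToDigit covers up to 16)
render :: Integer -> Nat -> String
render base n = showIntAtBase base intToDigit (toInteger' n) ""

fifteen :: Nat
fifteen = One (One (One (One Empty)))

-- render 2 fifteen = "1111", render 12 fifteen = "13", render 16 fifteen = "f"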

And people can have as many stupid plugins or whatever to the editor as they want, without me having to download new "upgrades" to the core language. Of course, I actually use 2D syntax because it's composable, unlike 1D, but that's a different story.


 No.1192

>>1191

>Find me a non-trivial program that doesn't need JSON (or something with the same purpose) somewhere. Should there be built in JSON syntax? No, especially seeing as JSON is ill-specified.

Tons of programs don't use JSON. "Anything with the same purpose", in addition to being unclear (what's the purpose?), changes the question to one where the language arbitrarily chose something out of the things that serve the purpose. There is no "different arithmetic" serving the same purpose as standard arithmetic that is unjustly discarded by adding the latter to your language. So this doesn't answer my question.

Again, I get the intense feeling that you say "number" and "number type" but mean something completely different like "number syntax" or "special handling for numbers". Your rant about "Problem C" (which I mostly agree with, though if there is no standardized representation I don't see how you'd exchange programs with anything that doesn't natively understand whatever the implementation uses) supports me in that.

So what would a language "without a number type" look like? Does every single program in it have to define what a number is, arithmetic, etc. from scratch out of bits or whatever other primitive? If yes, what is the gain here, especially compared to the compatibility issues and subtle errors a dozen different implementations would create? If not, what exactly would be (semantically!) different from a "normal" language?


 No.1194

File: 9a97c2d5d9c1a32⋯.png (18.61 KB,601x390,601:390,_.png)

>>1192

>in addition to being unclear (what's the purpose?)

Serialization. It's either JSON, XML, ASN.1, Protocol Buffers, etc, etc.

>[This] changes the question to one where the language arbitrarily chose something out of the things that serve the purpose.

Yep.

>There is no "different arithmetic" serving the same purpose as standard arithmetic

I dunno… Why not define subtraction of natural numbers so sub(x,y) fails if y is greater than x? What's the correct way?

Why not have Naturals as well as Integers?

When inducing a protocol from types (as opposed to specifying bits or using some doodoo like Protocol Buffers or JSON), this is a decent gain, because now the protocol specifies exactly what's acceptable, and we don't need some comment (or clause in a spec) which everyone ignores, and we also don't need a bunch of runtime checks everywhere. Of course now we're more likely to need conversions from Natural to Integer or whatever once in a while, but I feel it's more explicit, and the other way around just lets lazy programmers not care about the edge cases.

We can have whatever types we want:

Integer: …-3,-2,-1,0,1,2,3…

Natural: 0,1,2,3…

Positive: 1,2,3,4…

Rational: +/- …1/1,2/1,3/1,4/1,1/2,2/2,3/2,4/2…
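For concreteness, a sketch of that checked subtraction, using GHC's Numeric.Natural as a stand-in for a user-defined natural type:

import Numeric.Natural (Natural)

-- the edge case is in the type: callers must handle Nothing explicitly
sub :: Natural -> Natural -> Maybe Natural
sub x y
  | y > x = Nothing
  | otherwise = Just (x - y)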

>Again, I get the intense feeling that you say "number" and "number type" but mean something completely different like "number syntax" or "special handling for numbers"

When talking about existing PLs, I mean the whole part of the PL dealing with numbers: the syntax, the implementation, the type the language provides to the user (i.e., ->int<- x = 3). When talking about my language, I just mean the user-defined type, and I guess whatever functions he defines on it.

>if there is no standardized representation I don't see how you'd exchange programs with anything that doesn't natively understand whatever the implementation uses

There _is_ a standardized representation; it's just this (abstractly, anyway; how it's encoded is a different matter):


x = Empty
y = One Empty
z = One (One Empty)

Instead of


x = 0
y = 1
z = 2

…and then it's completely up to the code viewer whether to display 0 instead of Empty

For the purpose of demonstration, if I wanted to work with existing languages, I could have a Haskell program like

f (a, b) = some_function (a, One Empty, b, One (One Empty))

and make an editor for Haskell code that provides the Alt+A GUI explained in >>1191

But I won't do that, because the gains are tiny; it only becomes worth it in a completely different environment than the standard UNIX-based OS and text-based PL.

>So what would a language "without a number type" look like?

Like Haskell or SML or Lisp without built-in numbers. As I said before, the user can define his own natural type like this:

data Natural = One Natural | Zero Natural | Empty

And even positive numbers:

data Positive = P_0 Positive | P_1 Positive2

data Positive2 = P2_0 Positive2 | P2_1 Positive2 | P2_End

(The automaton in pic related can help in understanding it. It's a binary representation of any number 1 or greater. It's LSB-first though!)

No builtin number library. No builtin syntax for numbers. The only types are algebraic datatypes, defined by the user and by whichever libraries he chooses to use. Numbers are built by defining algebraic data types. Same with anything else you want - lists, maps, sets, DNA, whatever - and syntax for these types can be defined separately, through editor "plugins".

Without any plugins to the editor, you just see the value as it actually is, e.g.:

P_0 (P_0 (P_1 P2_End))
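(One possible decoding, assuming P2_End is a bare end marker and the outermost constructor is the least significant bit; on that reading the value above is 4. I may be misreading the automaton:)

posVal :: Positive -> Integer
posVal (P_0 p) = 2 * posVal p
posVal (P_1 q) = 1 + 2 * pos2Val q

pos2Val :: Positive2 -> Integer
pos2Val (P2_0 q) = 2 * pos2Val q
pos2Val (P2_1 q) = 1 + 2 * pos2Val q
pos2Val P2_End = 0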


 No.1195

>>1192

>what is the gain here, especially compared to the compatibility issues and subtle errors a dozen different implementations would create? If not, what exactly would be (semantically!) different from a "normal" language?

Hrmph. It's kind of hard to go back and go over all of what led me to this.

10 years ago I was trying to design a secure typed language. Looking at every language and seeing how they all have some arbitrary crap machine-word integers pissed me off. So much that I even considered using Peano arithmetic. Later, I started to doubt this path, because Peano arithmetic is crap no matter how you look at it performance-wise, and probably can't be guaranteed to be optimized out. I started to think the idea of a general language can't exist and you have to add ad-hoc crap like built-in strings or ints to it. Later, I somehow met someone else working on a similar language for similar reasons. He did the same thing as me, just let the user define his own numbers however he wants. However, instead of Peano he implemented binary arithmetic. I was like sheeit, why didn't I think of that. And thus the problem was solved. Anything can be implemented by the user without built-in support, because it only has to be asymptotically efficient. And this covers all programming use-cases outside of DSP and a few other edge cases, which is fine. I only want the other 99.999% of the OS in my language. Polyglots burn in hell.

Some reasons I don't want built in integers:

I want the language to be as simple as possible (especially because some of the meta stuff might be complicated)

I don't want the language to have a grammar or require parsing (it's 2d like i said before. this isn't some bullshit flowchart system like DRAKON or whatever other crap is out there. it's 2d because this allows extensible syntax as well as being the opposite of UNIX and therefore the correct choice)

I want to reuse the types _of the language_ everywhere: for sending on the network, for embedding in documents (replace PDF and HTML), for embedding in chats; types in the database are just types in the language. And I never want anyone to have to choose how it's serialized. I can send someone an integer, or I can define data Char = A | B | C | D | E .. Z and send him a list of Char. Why not. Or I can define Glyph = List (List Bit) and send him a list of Glyphs. See, I'm trying to make a very general system here, and adding integers to it as a user is a trivial use-case. The way "libraries" work is you just link to an existing type or function by cryptographic hash. I could just add a list of such hashes to a "standard library" and that would basically solve the issue of people using type NaturalNumber1 and type NaturalNumber2 and forcing the user to convert between them.


 No.1196

>>1194

>the other way around just lets lazy programmers not care about the edge cases.

In fact, let me clarify that. It's not that lazy programmers are forced to care about the edge cases (they aren't, and some will still blindly copy around a bunch of conversions that can error, for example negative Integer to Natural). It's more that they are now forced to tell me what the edge cases are. And they're forced to make the protocol better defined. If I had to count the times I had to consider what happens when some number can be negative or hits some other edge case…



