There were decimal computers in the 60s and 70s, like the Burroughs B2500, and portable Cobol and Fortran programs ran on them without problems. Most languages of that era (Fortran, Algol, Cobol, BASIC, Pascal) don't require binary representations and can also run on tagged architectures.
C requires binary because it was made for the PDP-11, and everything else was an afterthought. Java mandates specific bit sizes and silently wrapping overflow instead of correct behavior. These shitty UNIX languages would have the same problem on Lisp machines and other tagged architectures; those computers are "too correct" to run C and UNIX. Requiring binary is just one of many symptoms of C's non-portability compared to other languages. What sucks is that bad programmers blame the machine when integers are treated as real integers (bignums), because their code does some shitty hack that only happens to work because the machine they wrote it on didn't trap on overflow.
Burroughs made tagged and segmented computers for Algol and decimal computers for Cobol.
https://www.smecc.org/The%20Architecture%20%20of%20the%20Burroughs%20B-5000.htm
>The descriptor was one of the most novel features of the B5000 when it was introduced twenty years ago. Indeed, Burroughs published a description of the B5000 system and titled it "The Descriptor", (subtitled "a definition of the B5000 Information Processing System"). The descriptor, used simply as an array access mechanism, allows bounds checking (done automatically by the hardware) as well as simplifying dynamic array allocation (essential in an ALGOL machine). It also allows for differentiating between word arrays and character strings, and can indicate the size (in bits) of the characters. However, it is more powerful than this.
https://en.wikipedia.org/wiki/Burroughs_Medium_Systems
>The B2500 and B3500 computers were announced in 1966. [1] They operated directly on COBOL-68's primary decimal data types: strings of up to 100 digits, with one EBCDIC or ASCII digit character or two 4-bit binary-coded decimal BCD digits per byte. Portable COBOL programs did not use binary integers at all, so the B2500 did not either, not even for memory addresses. Memory was addressed down to the 4-bit digit in big-endian style, using 5-digit decimal addresses. Floating point numbers also used base 10 rather than some binary base, and had up to 100 mantissa digits. A typical COBOL statement 'ADD A, B GIVING C' may use operands of different lengths, different digit representations, and different sign representations. This statement compiled into a single 12-byte instruction with 3 memory operands.
There are times when I feel that clocks are running faster but the calendar is running backwards. My first serious programming was done in Burroughs B6700 Extended Algol. I got used to the idea that if the hardware can't give you the right answer, it complains, and your ON OVERFLOW statement has a chance to do something else. That saved my bacon more than once.
When I met C, it was obviously pathetic compared with the _real_ languages I'd used, but heck, it ran on a 16-bit machine, and it was better than 'as'. When the VAX came out, I was very pleased: "the interrupt on integer overflow bit is _just_ what I want". Then I was very disappointed: "the wretched C system _has_ a signal for integer overflow but makes sure it never happens even when it ought to".
It would be a good thing if hardware designers would remember that the ANSI C standard provides _two_ forms of "integer" arithmetic: 'unsigned' arithmetic, which must wrap around, and 'signed' arithmetic, which MAY TRAP (or wrap, or make demons fly out of your nose). "Portable C programmers" know that they CANNOT rely on integer arithmetic _not_ trapping, and they know (if they have done their homework) that there are commercially significant machines where C integer overflow _is_ trapped, so they would rather the Alpha trapped so that they could use the Alpha as a porting base.
Having said which: I will gladly put up with the Alpha exception mechanism as long as
- there is a documented C-callable function which controls the integer trapping state
- there is a documented C-callable function which controls IEEE-ish floating-point traps
- there is a documented C-callable function which includes a barrier (can I _rely_ on signal(SIGFPE, f) including a barrier?)