Professional software is typically built in layers: source code specifies what the software is and does; configuration files configure the built product; project files specify how it gets built; external "build scripts" automate the process of building, testing, and packaging up the software product for use; and installers are essentially bundles of even more "build scripts" that do the work of installing.

So what's really going on here?

It's all just layers of instructions for what to do with what. In other words, it's all just a big computer program which generates a software product. Software engineering boils down to making programs which manipulate and generate other programs!

This process SHOULD afford complete control over every last detail of the software product being built; HOWEVER, the common practice for "computer programming" imposes needless layers of language barriers which highly restrict any control one has over the generated software product.

Any tool that processes input (user input, code, or configuration) also controls the grammar (i.e. the expected format) for its input, and thus also defines a language for which it is the interpreter (e.g. it can read instructions and do what they say, even if those instructions are limited to setting values).

In other words, any such input (e.g. a configuration file) IS a program, but is only as capable as its "programming language" allows it to be.

This also means that any tool (program) which processes inputs of language X is also a virtual machine for language X. Going the other way, a computer processor (CPU) is nothing more than a hard-wired program that processes input in some assembly language. Ergo, programs are processors are computers ("virtual machines").
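
To make that concrete, here is a tiny sketch (JavaScript, with made-up names; nothing here is a real tool) of a "tool" that accepts a key=value configuration format. The tool is the interpreter (virtual machine), and the config file is a program for it, capable of exactly as much as this little "language" allows:

Code:
// A made-up tool that accepts a tiny key=value configuration language.
// The config file is a program; this function is its interpreter (VM).
function runConfig(configText, settings) {
  for (const line of configText.split("\n")) {
    const trimmed = line.trim();
    if (trimmed === "" || trimmed.startsWith("#")) continue; // allow comments
    const [key, value] = trimmed.split("=").map(s => s.trim());
    settings[key] = value; // the only "instruction" this language supports
  }
  return settings;
}

// "width=800\nheight=600" is a complete program for this interpreter,
// but it can never do more than set values, because the grammar says so.
const settings = runConfig("width=800\nheight=600", {});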

A compiler is a program which reads code in language X and converts it into language Y.

BUT WAIT: since the compiler is itself a virtual machine for language X, code written in language X IS a program which is executed by the compiler. Since the compiler spits out program Y by running X, a program in X is REALLY just a program which builds a program in Y.

Thus, a compiler specifies (is a computer for) a language X for which the ONLY kind of program that can be written is a program that generates another program Y; but that Y program can only be made from the kinds of building blocks that language X provides. So you really cannot write an X program with full control over what goes into Y.

MY POINT:

Rather than being limited only to the language of the compiler, or of the build-tool, or of the project-file, why not just directly write the actual program that makes the software product? This option is ENTIRELY available RIGHT NOW, so there is no excuse for a software engineer to NOT be capable of being 100% a master of his own craft (construction-wise).

THOUSANDS of programming languages exist, but they are all specific configurations of only a few core ways of doing software.

If instead of writing programs for specific compiler-languages, we start writing programs to make programs, then the concept of "programming language" can stop being a hard wired thing and become something that you cobble together however you want, for the specific software being made.

For example, a "class" or a "function" is not part of a set-in-stone grammar, but part of a code library that you import or make yourself. When you declare an entity (e.g. a class) in a compiled language, it is really a command to GENERATE that entity into the built product; but instead of a command, it may as well be a function that you call to create one, and that entity-making-function is no different than any of the other code in your program-making-program.
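
As a rough sketch of that (JavaScript, with hypothetical names like makeClass and product; none of this is a real API), the "class" declaration becomes an ordinary function call inside the program-making-program:

Code:
// Hypothetical sketch: instead of a hard-wired "class" declaration,
// an ordinary function generates the entity into the built product.
function makeClass(product, name, members) {
  product.entities.push({ kind: "class", name: name, members: members });
  return name; // hand back a reference that other generator code can use
}

const product = { entities: [] };

// Equivalent in spirit to declaring "class Point { x; y; }" in a compiled
// language, but here it is just code in the program-making-program.
makeClass(product, "Point", ["x", "y"]);

// Nothing stops you from generating classes in a loop, from data, etc.
for (const name of ["Circle", "Square"]) {
  makeClass(product, name, ["position"]);
}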

(JavaScript already enjoys a good portion of this style; and this is also why ES6 makes me uncomfortable. DUDE, it was already a REAL language BECAUSE it was open ended, so way to hard wire it up!)

One last perspective:

Part of the language limitation imposed by a compiler comes from the fact that it acts like a framework rather than a library. A framework is an already-written program for which you can "fill in the blanks"; but you have no control over what it does or in what order, and you cannot change its rigid structure. A library is a collection of pieces that you embed within YOUR program, and thus your code drives the component, rather than the other way around. What is a compiler other than a framework? Likewise, a framework defines its own machine, and thus its own language. You can choose to program within that model, or write your own program to build the structure/wiring exactly how you want.

Processors are computers are programs are frameworks. Come to think of it, functions are programs with their own grammar and execution as well. You can call one and deal with its side effects, or you can write code specific to your case. There is no lock-in here, so why should it be this way with programming languages? (After all, they're all really the same thing. To paraphrase Alan Kay, it's programs (computers) all the way down!)

Addendum:

Certainly, this may be overkill for small/personal projects; but professional software has all these layers, often with lots of common points that all have to be lined up just right, especially across different configurations, and the effort of wiring it all up can essentially be on par with the effort of just writing a program to do it.

A good example is WiX (Windows Installer XML), which in my opinion would be infinitely more usable as a programming API than as an XML language. For example, I ran into a case just the other day where what I needed could only be accomplished by manually tweaking the built installer with Orca. The result is a perfectly valid installer, but WiX provided NO way to override the built-in table value that I needed. An API could provide all the same "language" conveniences, but also give me ad-hoc access to the tables I want to generate.

(Related Video)
shkaboinka wrote:
Rather than being limited only to the language of the compiler, or of the build-tool, or of the project-file, why not just directly write the actual program that makes the software product.


So, I'll admit that I didn't read the whole thing and that I don't quite understand what it is you're trying to get at. I feel like the quote above is the basis of your topic, though. So, let me try and understand. Rather than write in C++, Java, Swift, Merthese, or whatever, you're recommending we write programs in code that the processor can interpret without the use of "VM's"?

On the same quote, I'm confused. Aren't we writing the "software product"? We aren't writing programs that then make "software products"; we are writing programs that we then call a software product.

I am not an avid programmer so I'm either completely missing the point you're making or you aren't conveying it clearly enough.
After having read this whole post, I, too, struggle to derive any concrete ideas from it. Perhaps that's not the point and this is meant as a thought-provoking commentary? Could you clarify?
A very important distinction to keep in mind is that not all programs need to do all things. Many programmers would rather have a language that limits their usage of the underlying machine in exchange for the ability to more easily indicate the end goal, rather than needing to micromanage the CPU.

Let's take a trivial example. You want to multiply two numbers on the z80 CPU. You could either implement your own multiplication routine in assembly, or you could do 'a * b' in C. Sure, you have less control over the output code with the C version; however, it's immediately obvious even to a computer that your goal is multiplication. The compiler may then choose the best possible algorithm to slot into that bit of code, whereas a naive programmer might just do a loop which adds 'a' to itself 'b' times.

Additionally, the main reason why programming languages were invented in the first place was not simply to make writing programs easier. Programmers were very happy writing in assembly, as the instruction sets were designed to be easy to use directly. However, writing a program in assembly inherently ties you to that particular architecture, and often even that specific hardware configuration. Many early languages were created to allow write once, compile anywhere functionality.

The main downside to this approach is a loss of control, as the differences between machines would then need to be glossed over in order to make the language actually function. However, clearly programmers the world around have deemed this an okay price to pay, especially as computers have grown stronger and the overhead of abstractions has fallen to minuscule proportions. One of the reasons why C is so lauded is that it exposes almost as much power as raw machine code, but is still very portable if used correctly while invoking almost no overhead from abstractions.

If there was only one computer configuration in the world, programmers could afford to write their programs directly in assembly. Unfortunately these days, the sheer number of different possible computer configurations necessitates the use of programs which generate programs, for if we did otherwise, we'd be lucky to even have anything resembling a modern desktop OS and software ecosystem.
Alex wrote:
shkaboinka wrote:
Rather than being limited only to the language of the compiler, or of the build-tool, or of the project-file, why not just directly write the actual program that makes the software product.
I feel like the quote above is the basis of your topic
Essentially.

Alex wrote:
We aren't writing programs that then make "software products", we are writing programs that we then call a software product.
Actually, BOTH perspectives are accurate. Conceptually, the Java compiler is a VM, and Java source code is a readily-executable program for it. Think about it: your code is specified in terms of a grammar (language) and is then fed into the compiler, which does what the code tells it to do (interprets it), namely "make a program like this". It just so happens that the language is designed such that it feels more like specifying parts of a program rather than issuing commands to generate those parts; and thus you may think of it as "being" the program to be generated. But it is equally a program-making-program written in a purely declarative language.

Alex wrote:
Rather than write in ..., you're recommending we write programs in code that the processor can interpret without the use of "VM's"?
Actually, it's the opposite. As stated, source code is validly also a program that issues commands to generate parts of a program, albeit in a strictly declarative way. Thus, why not just use a general-purpose programming language to issue whatever commands or generate whatever content you want? (Rather than a "make it look like this" language, where "this" can only be one of several pre-decided things.)

If anything, I'm recommending writing a program A (in the normal sense) that literally contains code to assemble the parts of a valid executable program B, rather than writing source code directly for B. This allows you to mix whatever paradigms you want.

This is not to say that you wouldn't use layers of "language", just that what we typically think of as language could be equally substituted with code-generating libraries. Instead of a Java compiler, you'd have a Java-language library. Instead of "class Foo { ... }", you'd have "Foo = new Java.Class(..members..)". It's all functions, and you can import others, or write your own.

Certainly, I'm NOT suggesting that you write your Java programs in Java bytecode, though.

OOP and functions allow one to form arbitrary layers of abstraction, and I think they can do the same to provide "language" abstractions, so that the code (though it has a more code-generatey feeling) could be free to use language however you want, but still benefit from language abstractions.

And perhaps you have declarative structures in your code that LOOK a lot like a high-level language, and other code that processes those structures in a generic way; kind of like merging a compiler with a specific program that it's making, only you own all the code. What of reuse though? Well, make the language abstractions be reusable functions and classes or whatever, in the form of a library, rather than in the form of a compiler.
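
As a rough sketch of that (JavaScript; everything here is invented for illustration), a declarative structure plus the generic code that processes it might look like:

Code:
// Hypothetical sketch: a declarative structure that "looks like" a little
// high-level language, plus generic code (which you own) that processes it.
const routine = [
  { op: "set",  target: "count", value: 0 },
  { op: "loop", times: 10, body: [ { op: "add", target: "count", value: 2 } ] }
];

// Walks the structure and emits target code; a tiny piece of "compiler"
// merged into the program that is being built.
function emit(statements, out) {
  for (const stmt of statements) {
    if (stmt.op === "set") out.push(stmt.target + " = " + stmt.value + ";");
    if (stmt.op === "add") out.push(stmt.target + " += " + stmt.value + ";");
    if (stmt.op === "loop") {
      out.push("for (let i = 0; i < " + stmt.times + "; i++) {");
      emit(stmt.body, out);
      out.push("}");
    }
  }
  return out;
}

const generatedCode = emit(routine, []).join("\n");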
I get the gist of what you're saying now. We write a program in a syntax derived from English (the grammar), which we then compile into a program the computer, more or less, natively understands. But then...

shkaboinka wrote:
Actually, it's the opposite. As stated, source code is validly also a program that issues commands to generate parts of a program,


You are losing me again.

Quote:
If anything, I'm recommending writing a program A (in the normal sense) that literally contains code to assemble the parts of a valid executable program B, rather than writing source code directly for B.


Why not just write Program B? Seems like a lot of work to get an end result; writing multiple programs (I assume by your use of the bolded text) to eventually build Program B.

Quote:
This allows you to mix whatever paradigms you want.


Can you explain why this is better than just writing Program B?

I'm also not 100% on the rest of your post but like I stated, this is probably far above me.
Let me try another take on it:

Let's say you are writing a program that needs to be able to generate arbitrary sound files. There's probably a decent code library that already provides most of the functionality that you need. However, no single library provides the full list of POSSIBLE operations & representations that everyone may ever need; so perhaps you use two or three libraries and/or some of your own code to perform other kinds of modifications.

A library is nothing more than "helper code" that you otherwise could have written yourself, so choosing to use a library is also a choice for how to process and store the sound data. That's a choice you'd have to make whether or not you use a library to help. In either case, it's still YOUR program, and you have full control over what it is generating and how.

Note that although your program has full control over the binary sound file, it would use high-level abstractions to do so. For example, there would be functions to combine waveforms, change pitch, etc., with the low-level details hidden behind abstractions. You still have full control because you choose WHICH abstractions to use.
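
In code, that might read something like this (a sketch only; "soundlib" and every function on it are made-up stand-ins for whatever real libraries, or code of your own, you actually choose):

Code:
// Sketch only: "soundlib" and its functions are invented stand-ins.
const soundlib = require("soundlib");          // hypothetical library
const mytools  = require("./my-sound-tools");  // your own helper code

let track = soundlib.sineWave({ frequency: 440, seconds: 2 });
track = soundlib.mix(track, soundlib.noise({ seconds: 2, gain: 0.1 }));
track = mytools.customFade(track, { out: 0.5 }); // something no library provided

// Your program still controls every byte that ends up in the file.
soundlib.writeWav(track, "output.wav");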

If instead of a sound file your program generated an executable program, then you have full control over what goes into it and how it is wired. You have control over (or at least are responsible for) every bit, every machine code instruction. Nevertheless, YOUR program (the one that generates the product) is not ITSELF an assembly program (that would be like the sound file being generated by another sound file), and you would be using abstractions to create it (as with the sound-file program), rather than just laying out each assembly instruction for the whole thing all at once.

Now, let's replace those sound libraries with "programming languages" for specifying sound data; and instead of writing a program that uses those libraries to generate the sound file, you just write the "source code" for your sound file, which you then pass to a compiler to compile it into an executable (playable) sound file:

Each language can ONLY allow you to use the operations and data-representations that IT provides. Now you have to choose ONE locked-down grammar to code your sound data in, and then feed it into that compiler which builds it in its own pre-decided way.

With languages and compilers, you have no control over the available building blocks, nor over any of the construction process. With a library, it's all open for free use in whatever context you want, while providing the SAME kinds of abstractions that a language does.

My point is that this is equally applicable to building an executable (aka compiled, aka assembly) program. You could have all the same abstractions of any particular compiler-based language, but without being tied down to one rigid grammar or another.

For example, you want to implement something for which the ideal solution uses coroutines, but you also have a nice error handling mechanism that relies on the ability to throw exceptions. You find these features in two different languages, but neither has both. Well shoot, it's either all of one or all of the other!

There's no reason it has to be that way. Dude, if you're the one MAKING the thing that gets generated, shouldn't you be able to create any valid executable using any mechanisms that make sense for it? Well shucks, to do that, you'd need to be some sort of software-smith (*cough* programmer) who can use or build whatever tools (abstractions) (*cough* programs and functions) that allow him to do whatever he's trying to do.
Sounds like you're talking about creating and embedding custom DSLs into your program code using libraries that make it easier to do. That is a thing people actually do, though some languages make it easier to do than others.
Unknownloner wrote:
Sounds like DSLs. That is a thing people do.

The resemblance exists, but both approaches face different directions:

What I've described is FORGOING the use of a compiler, and using everyday programming techniques to DIRECTLY and programmatically generate the software artifact, rather than specifying its representation in terms of a grammar/language which must then be compiled or translated.

An embedded DSL on top of Java still generates a Java program, which does nothing to escape the limits of Java. (LISP macros are no different!). My approach would still require you to code in Java, but your code would assemble the binary product directly, and thus you can make ANY valid JVM executable at all.

A DSL that maps directly to executable code is still boxing code up inside a grammar, and my suggestion is to break free of that altogether.

I get that the idea with a DSL is to get to choose whatever language aspects are most suitable for the task at hand; but what I'm suggesting is to NOT then make a grammar out of it, but instead use regular programming techniques to capture how it's generated. Instead of a grammar rule, write a callable function!

Programming abstractions and language abstractions are fundamentally the same thing. One of the major differences is that a DSL gives you whatever syntax you want, but you are stuck with the same underlying model; whereas using programming abstractions allows you to generate whatever you want, but you are stuck with the syntax of the language you are using.

I'm talking about controlling what comes OUT as software, not controlling what goes IN. Part of the confusion is that we call both things "the software".
Alex wrote:
So, I'll admit that I didn't read the whole thing

That makes it hard to have a meaningful discussion. If I said "Foo is about Bar", and you say you didn't read the whole thing but you're not sure what Foo is about, then you are asking me to read it back to you. Why should I do that when it's posted right there^?

... Sorry though, I know I'm long-winded, and I do appreciate the effort to try to follow my train of thought Smile

By the way, I have over time gone back and made corrections. So aside from just getting a stronger sense of what I said, there are several key points which I know didn't make sense with the way my phone autocorrected them, but they do now.

I've also added a related video link to the bottom of the original post that I meant to include to begin with.
I feel like there's some weird logic going on here.

You're suggesting that we break out of the bounds of traditional programming languages (and their compilers/interpreters/assemblers) and "directly" generate the software artifact. I've put "directly" in quotation marks because that's the part that seems very hand-wavy to me.

To do work on any traditional computer, which would include generating a software artifact, eventually you must execute a program encoded in machine code. Compilers, interpreters, and assemblers really all serve the same function: translating the description of a program not encoded in machine code into machine code (eventually). But if traditional programming languages are off limits, then by extension, every compiler, interpreter, and assembler in existence is off-limits. So... what mechanism remains that will allow you to "directly" generate the software artifact?

Unless you're writing programs in machine code, you must be writing in some programming language. It seems to me that the only way you could avoid using any traditional language would be if your program somehow defines its own language. But with no compilers/interpreters/assemblers already translated into machine code, your program has no path to becoming machine code and ever executing.
This seems like it would require being Ephraim. Build the wires, CPU, RAM, and storage components from scratch, and why not the screen while you're at it? Then you program in binary from scratch. (Those who visit CW regularly know what I am talking about.)
Runer112 wrote:
...To ... generate a software artifact, eventually, you must execute a program encoded in machine code. ... But if traditional programming languages are off limits,... what mechanism remains that will allow you to "directly" generate the software artifact?
Thanks for asking, as this is an important point! Smile

Since the goal is to have full control over building a software artifact, we are only concerned with the process (e.g. the code) for generating it, and not the underlying representation of that process. In other words, the program-generator is only meaningful as a script, and not as a generated artifact.

For example, let's say you write your program-generator in C++ (i.e. you write C++ code that builds an executable program). To get the program that this generates, you must compile your generator code into a program, and then run that program, which in turn generates the executable program (or other software artifact) that you were interested in generating to begin with. (And the generated program or artifact is / does something useful for somebody.) Whenever you need to make a change to the end product being generated, you must update the code that generates it, compile it, and then run it, which in turn generates the updated software product. And so on. Since the only reason for running the generator is to create the software product (i.e. it is not ITSELF the software product), the fact that it runs as an executable is entirely uninteresting: you just need to run it so that the code can do what it does. And the only reason you compile it is so that you can run it, not so that you can "have" it as an executable.

In other words, the generator (i.e. the code that generates the software product) only needs to be runnable, so it may as well just be an interpreted script. If written in a compiled language, then you need to perform two steps to "run" it, which voids any size/speed advantage anyway.
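
As a trivial sketch of that workflow (Node-flavored JavaScript; the "artifact" here is just a few made-up bytes for illustration):

Code:
// generator.js -- run this script and it produces the software artifact;
// the script itself is never the product.
const fs = require("fs");

// Pretend these bytes are the machine code / file format we want to emit.
const artifact = Buffer.from([0xC9, 0x00, 0x76]); // made-up contents

fs.writeFileSync("product.bin", artifact);
console.log("Generated product.bin (" + artifact.length + " bytes)");

// To change the product, edit this script and run it again:
//   node generator.js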

For the sake of argument though, what if you want all your programming to use whatever mechanisms you like; can you custom-generate all layers of programming? Obviously, it has to start somewhere; you can't have a program-generator-generator-generator-... ad infinitum. But this is no different for high-level languages & compilers. Was your C++ compiler written in C++? What about the compiler that built that compiler? Somewhere down the line, somebody had to write a compiler in assembly (or compile high-level code by hand). So using libraries in place of hard-wired languages has to start with hard-wired languages; but as explained, that's of no meaningful consequence.

I do believe that this route could have been taken instead of inventing compilers, and we'd still have the same kind of "language" abstractions: a family of functions forms the same kind of grammar (i.e. the building blocks of meaning) that a family of code-rewriting rules does.

(This is also why I think the ultimate "software tool" is a dynamic-code system that is self-modifying, self-defining, and self-encapsulating, because it can expose its own underlying representation and be the target of its own generation, while still generating external artifacts ... but that's a whole other topic.)
Here is an excellent working example of what I'm suggesting, only it's about generating dynamic images instead of programs; but the concept is the same (especially the example of the fill-in-the-blank document):

http://worrydream.com/DrawingDynamicVisualizationsTalk
Project Idea:

A JavaScript library/tool to generate calculator programs. It would look similar to compiler code, except that each piece is a callable function (that includes constructors, aka "classes"). Thus, instead of being built into a program called a "compiler", you'd write your own code which uses it as a library to generate a specific calculator program.
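
To give a flavor of it (purely hypothetical; no such library exists yet, and every name below is invented), a "build script" might read something like:

Code:
// Hypothetical usage sketch -- the library and all of its functions are
// invented here to illustrate the idea.
const z80 = require("z80-program-builder"); // imagined library name

const prog = new z80.Program({ name: "HELLO" });

// Each "statement" is an ordinary function call instead of source text.
prog.clearScreen();
prog.displayText(0, 0, "HELLO WORLD");

// Ordinary JavaScript drives the construction: loops, conditions, and your
// own helper functions, rather than a fixed grammar.
for (let row = 2; row < 6; row++) {
  prog.displayText(row, 0, "Line " + row);
}

prog.waitForKey();
prog.ret();

prog.writeTo("HELLO.8xp"); // the generated calculator program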

Thus, instead of "source code" of the output program in language X, you'd have a "build script" to generate the output program. My claim is that it would provide the same convenience/abstractions for "specifying" the output program, but without the lock-in of a hard-set "language".

This would be a sufficient POC to inspire other such efforts, while potentially creating something extremely useful for the community Smile

A step further would be an online tool / IDE for such. Maybe it could even integrate with SourceCoder?

...

Also, it would resolve my inability to settle on a final set of features for a high-level calculator language (Antelope).
what you've been glossing over here are a few important barriers to approaching things this way that make your ideal an impossible one to realise (at least when targeting modern hardware and OS environments)

1: writing machine code is tedious

the amount of time required to write a program of any great complexity directly in opcodes, or even just specifying which opcodes a program should spit out to build a new program, is prohibitive to just about everybody everywhere

2: writing optimised machine code is hard

as you probably know well, there is a very big difference in efficiency between optimised and unoptimised machine code. in order to optimise well, you would need to familiarise yourself deeply with your target platform. and when you wish to output to multiple different architectures, which is generally the case these days, you would have to familiarise yourself with several. furthermore, in order to effectively apply optimisations in the context of generating a program, you would have to write a program that is aware of potential optimisations all throughout its source, leading to obscene levels of complexity. essentially, you would be writing your own compiler, having to go back and adjust the entirety of your program-generating program every time you add a module in order to maintain that awareness. at this point, you are attempting to compete with existing compilers improved by hundreds of people over the course of decades and somehow expecting to spit out better results.

3: writing flexible machine code that plays nicely with a modern OS is a nightmare

have fun writing machine code that knows how to properly interface with any OS without clobbering RAM areas, passing malformed input to kernel functions, etc. that is all.

if these three reasons haven't convinced you, what if i told you that you can accomplish much the same effect, when necessary, in a much cleaner and safer manner?

most programming languages support integration with C, yes? write the brunt of your program in the language of your choice. if you need more control over a specific area, write a function or subsystem or whatever in C instead. and, if even that isn't enough, C allows embedded asm. an old teacher of mine used to use a cute little phrase that is very applicable here: "be as general as you can and as specific as you have to be". in doing so, you will be able to write clean and maintainable, but still flexible and efficient, programs for modern environments.
shmibs:

You are essentially right that I'm talking about writing compiler code, except in the form of a code library rather than having an explicit compiler.

With that, the concerns for manipulating/generating assembly are no different than for anyone writing a compiler, and your warning is identical to "don't try to make your own language and compiler and think you can do it better than what's already out there". Meh, there's still a place for that. What of Axe or ICE?

In either case, a language/compiler relieves its users (programmers) from having to be concerned with low-level details. Libraries do the same thing; the word here is abstraction. I did claim the ability to change what one wants, but that's more about possibilities than requirements for everyone using it. Those instances where you DO want to be able to do something that the language will not allow ... well, it's just code that you are importing. Except maybe you only import & use what you want. And maybe you import multiple libraries where you'd otherwise be forced to choose all of language A or all of B. I guess I'm taking a Git philosophy here: most users would use standard stuff that's already well implemented; but it's open anyway.

As for reinventing ... half the same response I already gave about Axe & ICE, half about reusing existing work/paradigms. Initially this means capturing language aspects (or at least their inner workings) into such a library, but afterwards reusing existing libraries. It's about open reuse and composition, not reinventing everything every time.

As for compiler optimisations that require deep coupling between features carefully crafted to work together a certain way ... you got me there. That's a legit concern, and I don't know how well that part pans out. However, this is a new idea / approach, and there is a lot to explore. I'll bet that, to some extent, the code responsible for such could STILL be modularized sufficiently to fit nicely in a library; though that may mean that the relevant pieces carry added framework/dependency with them. I'll just have to see where that goes, and I'm ready to admit that I doubt that the level of optimization you speak of (e.g. interprocedural analysis) is achievable. But even if the best that can be had is cookie-cutter construction of assembly code, I still think there is great value in what I'm doing.

There are admittedly limitations, which is also why I'm suggesting starting with a calculator program implementation. Maybe even start with just z80.

Beyond that, I do take your cautions seriously, and I'm interested to see to what extent a POC can deliver. I'm trying to pursue a fundamentally new approach rather than assuming that the way things are done is the best or only or right way of doing things.
if all you want is allowing the inclusion of libraries with greater control than the main program then, as i mentioned in the last post, that is already a possibility in most languages, and is a thing people use heavily.

as for Axe / ICE, this is why i made the distinction of "modern systems". what you're suggesting is basically how things are done for z80 calcs, with people sharing routines / libs like quickdraw. when working with newer machines, people do much the same, but using higher level languages, dipping into lower level langs only when necessary (bootstrapping a system to work as a modern environment / writing hardware drivers / things like that). the level of control you are talking about is available to everyone, but, unless required to do so, people don't use it. the reasons for them not doing so, even when they could, are laid out in the post above.

i'm not trying to dissuade you from trying to make something new; i'm trying to help with understanding why things are the way they are. even if programs were mostly made from shared libraries, writing the code to tie them all together would still be prohibitive for most people because of the reasons above, not to mention having to rewrite things to support different architectures.
What you have been describing sounds a lot like LLVM. The input and output can be redefined at will, but there is a fixed intermediate language.

There can be only so many levels of meta. You start touching the metal fairly quickly.

The original post does not sound particularly coherent in terms of argument. It is as if you are stating that transpilation is the better way to do things, since one can have control without necessarily more work, and that desirable syntactic sugar can be defined by yourself. Metaprogramming, in a way.

Basically you are stating that the reason programs can get lengthy is not because the actual solution is complex but because one must work with the dynamics/limitations of the language they are using, which can be inconvenient for certain applications. Basically instead of trying to fit a square peg in a round hole, you want to reshape the peg so that it matches the hole (or vice versa). Right?
  