Yes, very well said. Computability cannot exceed that of a Turing machine, and the real power is in composability: encapsulating pieces of software into composable chunks ("modules") allows for reuse & powerful expression through abstraction. One could even argue that that's what programming is.

My goal is to extend this composability into the programming language, into the runtime, and ultimately right out into the end-user's hands; but I'm not trying to change computation by inventing a "more-Turing" machine.

Composable Language:

One who crafts programs ought to be able to employ whatever execution mechanisms / model he sees fit; that is, if he is really free to craft whatever he wants. Using a high-level "programming language" means allowing those decisions to be made for you. I think a different paradigm of "language" is possible, where one composes it from both custom and 3rd-party pieces. This is similar to using a library (a collection of tools that you can use in your program), whereas using a static language is like using a framework (the backbone of a pre-made program that you customize via fill-in-the-blank).

Some argue that language composition is already available through code transformations (macros, A -> B compilers, runtime code generation, etc.); but that's not the same thing, because the resulting program is still operating in terms of mechanisms provided in the target language. For example, Java's "lambdas" generate full-fledged classes; and good luck implementing closures in Java without nasty overhead tricks! Hey, if a coroutine or a generator or a continuation is a perfect fit for the task at hand, and I (or somebody else) already know how to implement it, then I should be able to use it in my programs.

Tools like JetBrains MPS are a step in the right direction, but don't solve the larger problem of runtime composition.

Composable Runtime:

Source code is not software. Source code specifies software to be generated. Editing source code results in a new specification from which a new software artifact can be generated. What I'm getting at is being able to manipulate & compose the actual software artifacts themselves: you can have all the coding tools in the world that let you manipulate & transform your code however you like, and you still end up with a locked-down as-is artifact. However, if software itself were made to be directly manipulable and composable, then one could own his software as a USER (in the same manner that I talked about owning the design of a program as a crafter).

This requires that software artifacts be composed in a manner that keeps them editable, which means building blocks that can be created / inspected / manipulated at runtime ... and we essentially end up with something like LISP or JavaScript, but with an interface to runtime software entities. And if this runtime is to be truly ownable as well, then it must apply to itself (hence the title of this forum topic).

At that point, one can manipulate software entities without writing code. This becomes more true as one makes tools to assist with common tasks. The repetition of this process results in an environment where one can use user-interfaces instead of code, and where "programming language" can be less of a defined whole and more of an evolving collection of software-manipulating tools. (DSLs can be implemented as functions which take entities as their "code" and either generate new code, or interpret it directly). Furthermore, there's no reason why a software entity couldn't contain its own engine & DSL-based code. A similar (though different) thing happens all the time when webpages dynamically load other libraries as needed.
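For instance, here's a rough sketch (in JavaScript, with made-up names and node shapes) of a DSL implemented as an ordinary function that takes entities as its "code" and interprets them directly at runtime:

Code:
function runDrawingDsl(entity) {
  // The DSL's "code" is just an object tree; interpret it directly.
  switch (entity.op) {
    case "group":  return entity.items.map(runDrawingDsl).join("\n");
    case "circle": return `circle r=${entity.r} at (${entity.x},${entity.y})`;
    case "line":   return `line (${entity.x1},${entity.y1}) to (${entity.x2},${entity.y2})`;
    default:       throw new Error(`unknown op: ${entity.op}`);
  }
}

const picture = {
  op: "group",
  items: [
    { op: "circle", r: 5, x: 0, y: 0 },
    { op: "line", x1: 0, y1: 0, x2: 10, y2: 10 }
  ]
};

console.log(runDrawingDsl(picture));

The same entity tree could just as easily be handed to a different function that generates new entities instead of interpreting them.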

<The below material is older and needs review / editing>

For example, one could hypothetically edit an IDE in the IDE itself ... Or rather, one could edit the SOURCE for it, and that's not the same thing, because then you generate some new artifact and call it a new version of the previously generated one. What I'm talking about is that you edit the IDE code in the IDE, and you IMMEDIATELY have the feature change available for use. Hypothetically, one could repeat this kind of cycle every few seconds to "sculpt" the product as desired, improving the tools for doing so as the need becomes apparent. For that to work, you NEED a flexible runtime where the "code" is not compiled away to something else.

Also, it would suck to have all that power within a runtime, but then to have to go back to the old way of doing things to update the runtime itself; especially if that runtime is to "become" the computer. Since you cannot get away from a static base layer, a possible solution is to wrap those pieces (which need only be few and small) in the same constructs that it operates on, so that it is at least possible to edit the runtime from itself, even if it's the equivalent of viewing and modifying "assembly" code (using that term loosely) inline. This either requires that the whole thing stay running and have its definition be tied to what's in memory, or that there be some hook to save it off ... However it's done doesn't matter, so long as it can then be changed if needed later without having to stop the world and recreate it again ("recompile"), or at least if the runtime state can be saved and resumed.

Also, you've really got to look at Bret Victor's examples to get an idea of what this might look like as a user tool rather than just as a programmer tool. He talks about a tool for dynamic images, but that does not escape that "programmer had to invent the world for you" situation unless the tool can be applied to itself, which cannot happen unless the artifact (not its blueprint) is directly modifiable, which I think means that the artifact is its own blueprint.

... Is that making a clearer picture? Without that runtime changeability, I think what I'm talking about is not much different than LISP. Though I could still talk about replacing a MOP with a self-MMOP. (but that's CLOS, which is implemented IN LISP, and therefore somewhat irrelevant anyway)
The following is the closest real-world thing (that I've found) to an "Object Oriented" model as Alan Kay envisioned it. Of note are the functional nature (no "state", no setters), the strict message-oriented model, the composability, and the efficiency (caching / parallelism): (video 1)

It's also worth noting that for any such "object oriented" system to work, the pieces must be nestable (composable) into a hierarchy, and not violate that hierarchical structure, as stated in this video: (video 2)

The second video above argues two faults in this model (that make OOP "not work") that the model in the first video seems to handle really well:
1. Objects cannot have direct references to each other. How about URLs or relative paths? (relative within the object hierarchy?)
2. Adhering to a strict hierarchy requires cumbersome "bucket-brigading" ... The model in the first video did not seem held up by it; LISP programs are made up entirely of that kind of hierarchy, and are highly composable. Also, if your "objects" (nodes) are proper abstractions of their whole subtree, then the vertical integration should flow through meaningful (rather than arbitrary) points in each abstraction layer.

Now, the objects (nodes) are not so much designed as pieces of a hierarchy, but as proper "wholes" in and of themselves, independent of external context. This is what Trygve (MVC inventor) calls "restricted OO" within the DCI paradigm: (PDF 1) (see diagram on page 31)

This "restricted OO" is what allows the aforementioned kind of object-hierarchy to work: the pieces and the hierarchy can be modified separately from each other. It's like function composition, but with objects. This composition of "wholes" is what Alan Kay meant when he coined the term "object oriented", and described it as "objects all the way down".

THE BIG PICTURE HERE (or a major part of it) is that this kind of composition of parts is made directly available to the end user. This is not about composition of CODE, but of actual runtime entities that can be composed and manipulated into an "object hierarchy" in much the same way that a Unix (or VIM) user can compose ("pipe") commands together. The network example in the first video is a good example of how pieces can be created and reused/re-combined without a "recompile", because they are not modules of uncompiled code, but are persistent runtime artifacts that can be composed together by any user at runtime.

Imagine a tool which visually depicts composition hierarchies of "objects" in a manner that the end user can literally build whatever they want using meaningful interaction metaphors (e.g. drag and drop). UI components are the easiest to imagine visually nesting inside each other, but I also think that behaviors could be manipulated using similar visual metaphors: If everything is composed of objects, and each object acts like a machine with an external interface ... well, isn't that what a UI is? So why not just depict objects like little machines with a UI? Each method or field would have a corresponding visual component. Interactions could be modeled using a visual analogy for wires or pipes.

This works if the objects are composable as wholes and honor "restricted OO": objects can contain other objects, and can "wire" together their child objects. An object's internal wiring can only connect the external interfaces of its children and provide the internal wiring behind its own external interface. A user could hypothetically look at the internal wiring of an object and see (or modify) how button X is literally wired to internal components Y and Z.
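To make that concrete, here's a minimal JavaScript sketch (all names made up) of a parent object that composes two children and wires their external interfaces together without reaching inside either one:

Code:
function makeButton(label) {
  const handlers = [];
  return {
    label,
    onPress: (fn) => handlers.push(fn),   // external interface only
    press:   ()   => handlers.forEach(fn => fn())
  };
}

function makeCounter() {
  let count = 0;                          // hidden internal state
  return {
    increment: () => ++count,
    read:      () => count
  };
}

// The parent "machine": its internal wiring connects its children's interfaces
// and backs its own external interface.
function makeClickerGadget() {
  const button  = makeButton("Click me");
  const counter = makeCounter();
  button.onPress(counter.increment);      // button X wired to component Y
  return {
    children: { button, counter },        // inspectable composition
    pressButton: () => button.press(),    // the gadget's own interface
    total:       () => counter.read()
  };
}

const gadget = makeClickerGadget();
gadget.pressButton();
console.log(gadget.total()); // 1

Nothing here touches a child's internals; re-wiring the gadget only means changing the connections between interfaces.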

I think that even FUNCTIONS can be depicted with this same metaphor, getting at the idea that the code itself is just nested objects. "Code" would be an object structure (AST) made up of the most fundamental operations for modifying & composing objects in the same way that the end user can. Like LISP, data & "code" & everything else is made of the same stuff. Only it's a model that exists at runtime and is exposed to the end user, rather than as just a pre-compile trick.
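As a rough sketch (the node format is invented for illustration), "code" could literally be a nested object structure whose nodes are the same fundamental object operations the user already has:

Code:
// Execute a "code" object against a target object using only basic operations.
function exec(node, target) {
  switch (node.op) {
    case "const":  return node.value;
    case "get":    return target[node.name];
    case "set":    target[node.name] = exec(node.value, target); return target[node.name];
    case "delete": return delete target[node.name];
    case "has":    return node.name in target;
    case "seq":    return node.steps.map(s => exec(s, target)).pop();
    default:       throw new Error(`unknown op: ${node.op}`);
  }
}

// The "program" is itself just a nested object, so it can be inspected and
// modified with the very operations it performs.
const program = {
  op: "seq",
  steps: [
    { op: "set", name: "greeting", value: { op: "const", value: "hello" } },
    { op: "get", name: "greeting" }
  ]
};

console.log(exec(program, {})); // "hello"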

One side-effect of giving every object a UI metaphor is that one no longer needs to make a UI for everything; you just need an API in the form of your external object interface (considering that a whole program is an object). I guess in this sense, the API and the UI become the same thing ... an AUI? :)
An alternate model would be an object-based model as described in the previous post, but objects would instead respond to messages rather than having any externally observable properties. It's like each object is a function that takes a "method name" as its first argument, and responds accordingly. This is more in line with Kay's original object vision, and it is how Smalltalk is implemented.

This frees object implementation from any predetermined structure, so that message-passing is the only thing THAT needs to be implemented concretely. It also allows objects to have private internals and hide their implementation from the outside environment. If the model in the previous post is JavaScript objects, then this is the objects described in this article.
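A minimal sketch of that model in JavaScript (assuming nothing beyond closures): each "object" is a function that takes a message name plus arguments and responds, keeping its internals hidden unless it chooses to reveal them:

Code:
function makeAccount(initialBalance) {
  let balance = initialBalance;            // private; reachable only via messages
  return function send(message, ...args) {
    switch (message) {
      case "deposit":  balance += args[0]; return balance;
      case "balance":  return balance;
      case "messages": return ["deposit", "balance", "messages"];
      default:         throw new Error(`does not understand: ${message}`);
    }
  };
}

const account = makeAccount(100);
account("deposit", 50);
console.log(account("balance"));  // 150
console.log(account("messages")); // the object chooses what to reveal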

If the innards of an object are not observable (unless that object responds to a message to list them all), then composition has to come from building generators, or code that builds the objects into a hierarchy, and customization would be a matter of tweaking that code and regenerating. However, this does not sound terribly different than the code-versus-runtime-artifact model that I'm trying to get away from :/
The following video is already going after much of what I've been trying to do here: https://youtu.be/BBJWLjS0_WE

(though it only focuses on improving the world of the programmer; Bret Victor still has the best examples of implementing software from user-intuitive tools rather than just "code")
::::THIS BE THE "CONCISE DESCRIPTION"::::
LONGER DESCRIPTION (slightly outdated, but applicable)
JUSTIFICATION THEORY
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

This topic has gotten confused/messy over time. So...

Goal:

A software system that can be reshaped & modified in an ad-hoc manner by the end user, and can thus be whatever you want, or be used to create tools to do whatever you want. This is achieved by building the system out of "objects" (building blocks) that the user can create, modify, and restructure in an ad-hoc manner. Everything, even the tools for creating & modifying objects, is made of these objects, so EVERYTHING is modifiable within a uniform model.



Implementation Notes To Self:

The most important thing₁ is that all software entities ("objects", in the Kay sense) are exposed such that they can be inspected / modified / created at run time, so that the end user has potential control over everything.

The other most important thing₂ is that the whole system exposes itself for direct modification. Thus, it must be composed of (or exposed via) the very same objects that it facilitates the creation & modification of. Potential options are exposing the live "machine code" for runtime modification by encapsulating it all within objects, or using JIT-compilation to model the system entirely in terms of high-level constructs (objects) which are recompiled upon modification.

The other other most important thing₃ is to only implement what is absolutely necessary to get this system up and running, and then, as a user of the system, evolve the rest through runtime modification. Indeed, the ability to do this is the very point of this whole thing.
I had previously mentioned that it almost feels like I'm reinventing LISP.

Well, after reading through this great Scheme tutorial, I've found STRIKING similarities between my implementation-so-far and how Scheme is implemented.

I'm also seeing surprising similarities between Scheme and JavaScript (especially around closures), which makes sense because JavaScript started as a Scheme implementation before having "make it look like Java" requirements mandated at the last minute. (I don't have a link, but look up JavaScript and LISP).

... I still think I'll continue my POC in JavaScript, though the Scheme tutorials are giving me some good pointers :)
Looks good so far :) Keep it up!
Here is another video that is HIGHLY relevant to what I'm doing: Building your own dynamic language

(that title is a gross understatement)

I think I'm after the same thing as Ian Piumarta, though perhaps through slightly different means. His crème brûlée analogy applies equally to my goals here.
shkaboinka wrote:
I had previously mentioned that it almost feels like I'm reinventing LISP.

Well, after reading through this great Scheme tutorial, I've found STRIKING similarities between my implementation-so-far and how Scheme is implemented.

I'm also seeing surprising similarities between Scheme and JavaScript (especially around closures), which makes sense because JavaScript started as a Scheme implementation before having "make it look like Java" requirements mandated at the last minute. (I don't have a link, but look up JavaScript and LISP).

... I still think I'll continue my POC in JavaScript, though the Scheme tutorials are giving me some good pointers :)


I don't want to say I told you so, but..... ;) you should really go read that paper on scope sets I linked a little while back.
elfprince13 wrote:
I don't want to say I told you so, but..... ;) you should really go read that paper on scope sets I linked a little while back.


Actually, I've been meaning to come back and apologize!

After realizing the striking similarities between my code and a scheme interpreter, I've been researching further (R5RS, Racket, SICP, HtDP, and also REBOL), which has given me some invaluable hints & ideas.

I'll probably continue to implement my own system though, for two reasons:

1. The system I want to implement MUST own its own definition, which either means bootstrapping an existing language or making my own. In either case, everything about it needs to be able to be the target of its own (or the user's) modification.

2. I have some specific thoughts for an underlying substrate suitable both for code and for high-level (user) abstractions, which leads me to an "object" model. JSON is already a good start, as is REBOL (which helped inspire JSON).

By the way, you were right about hygienic macros being important; and I AM concerned with syntax (desugaring), though more so at a higher, user-end level (e.g. visual depictions and mouse movements instead of alternate arrangements of code symbols).
So here's a hypothetical picture of the system I wish to implement:
    ▪ Let A, B, C, etc. denote language-A, language-B, language-C, etc.
    ▪ Let M denote machine-language (or the language of the base machine).
    ▪ Let iXY denote a program in language X that interprets language Y.
    ▪ Let cXYZ denote a program in language X that compiles Y into Z.

1. Define A where code & data share the same underlying structure ("homoiconic")

1.b. Allow A to store, manipulate, and execute "native code" for M. (A small sketch of 1 and 1.b appears after this outline.)

2. Implement iMA to allow ad-hoc creation, modification, and execution of A entities through direct user interaction (e.g. using a REPL).

3. Use iMA to create iAA:

3.a. Modify iMA to contain the pre-initialized structure of A entities for iMA.

3.b. Where A does not have equivalent operations for the M code of iMA, embed the actual M code from iMA as "native code" within iAA. Specifically, embed a reference to the EXISTING M code rather than copying it over.

3.c. Discard iMA. (What this ACTUALLY means is discard the SOURCE CODE of iMA: I assume that it was written using some cMXA, in which case "iMA" is referring to that source code, and "iAA" is the structure embedded within the COMPILED iMA. This compiled program sticks around, and "is" iAA, since you can now run it and modify its own M code at runtime, all in terms of A)

Important: From this point forward, any further changes can be made entirely through runtime interaction within iAA in terms of A code.

4. Use iAA to code up an operation to serialize any A structure into an external file, and then immediately apply this operation to the whole of iAA (it can see its own entire structure). This will create a new M-executable for iAA that contains the serialization operation just created. From now on, a copy of iAA can be saved whenever runtime modifications are made.

5. Make iAA extensible in terms of A, and replace all embedded M code with A code:

5.a. Implement a JIT (Just-In-Time) compiler (cAAM) within iAA so that new functionality does not have to be coded as M code wrapped in A (as was probably the case for #4).

5.b. Modify A such that any remaining M code within iAA can be represented by equivalent A code (even if that A is a dummy stand-in for M code ... hey, that facility already exists at this point!).

5.c. Modify iAA such that runnable A code may also be backed by the equivalent M code, and so that modifying A code causes a JIT (re)-compile to regenerate the underlying M code (where applicable).

5.d. Reverse-engineer all M code within iAA (this should include the bootstrapper, the serializer, and the JIT-compiler) into the equivalent A code, but leave the existing M code as a "backing". Now ALL the code can be modified in terms of A!
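To illustrate steps 1 and 1.b above, here's a tiny hypothetical sketch (using JavaScript as a stand-in for M) of A entities as plain objects, where an entity may be backed by embedded "native code" that the interpreter invokes directly:

Code:
function evalA(entity) {
  switch (entity.kind) {
    case "literal": return entity.value;                          // plain A data
    case "native":  return entity.fn(...entity.args.map(evalA));  // embedded M code
    default:        throw new Error("unknown entity kind: " + entity.kind);
  }
}

// An A entity whose behavior is backed by native code (here a JS function):
const addEntity = {
  kind: "native",
  fn: (x, y) => x + y,
  args: [ { kind: "literal", value: 2 }, { kind: "literal", value: 3 } ]
};

console.log(evalA(addEntity)); // 5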

-----------------

At this point, the very definition of the language / system can be changed from within the system itself, on an ad-hoc basis.

Also, multiple JIT-compilers could be made for different platforms, and would sit benignly within the current "instance" of the system. Then the system can replicate itself by copying all entities, invoking the correct cAAX, and serializing-out the result; and all the compilers would go with it. ... It's like Ultron!

Another idea (that I've taken from REBOL) is to allow different dialects to be embedded within the system. That is, the eval/interpreting function of the system could be a reference within an environment-chain (in the Scheme sense), and different environments could point to different eval/interpreter functions as they please, thus allowing all code "down the line" to be in terms of another "language".
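Here's a hedged sketch of what that could look like (names and shapes invented): each environment in the chain can carry its own eval function, so code "down the line" is interpreted as a different dialect:

Code:
function makeEnv(parent, evalFn) {
  return { parent, vars: {}, eval: evalFn || (parent && parent.eval) };
}

// Two toy dialects over the same entity structure:
function evalArithmetic(node, env) {
  if (typeof node === "number") return node;
  const [op, a, b] = node;
  const x = env.eval(a, env), y = env.eval(b, env);
  return op === "+" ? x + y : x * y;
}

function evalReversePolish(node, env) {
  if (typeof node === "number") return node;
  const [a, b, op] = node;                 // operands first, operator last
  const x = env.eval(a, env), y = env.eval(b, env);
  return op === "+" ? x + y : x * y;
}

const outer = makeEnv(null, evalArithmetic);
const inner = makeEnv(outer, evalReversePolish); // nested context, different dialect

console.log(outer.eval(["+", 1, 2], outer)); // 3
console.log(inner.eval([1, 2, "+"], inner)); // 3, same meaning, different "language"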

(Note to self: talk about syntax in the form of visual abstractions, etc. ... it doesn't belong in this post though)
You've (still?) invented a Lisp machine, I think. I'm thinking of an anecdote I recall (but can't find right now) from somebody (I believe at MIT) who comments that they were disappointed by the inability to patch the running system when they switched from Lisp machines to an early version of UNIX.
shkaboinka wrote:
Computability cannot exceed that of a Turing machine
Just thrown in, out of context. What about Quantum Computers?
Tari wrote:
You've (still?) invented a Lisp machine, I think.

Maybe a virtual one. But as any LISP interpreter is technically also a virtual LISP machine, this statement doesn't mean much.

One of the main points is that the system provides a means for interactive coding, and for modifying executable (interpretable) code without a recompile; but more so that the code that specifies the whole system is also exposed, and subject to its own modification. Thus, you can (re)make it into anything you want, without needing any external tools. If you don't like the means for reshaping it, then you can reshape the stuff that allows you to reshape it, and then turn around and immediately do it the new way.

So while a self-modifying language is a prerequisite for such a system, the specific language used is not the interesting part. What's interesting is that you can reshape the system & the language into something new; but not in the sense of new SYNTAX that still runs on the old substrate, but actually being able to change the substrate itself, either as a whole or within nested contexts.

But what's yet even more interesting / important, is what this all enables from an end user perspective ... but that will have to be the topic of another post.
How this system is NOT equatable to macros (in LISP or Scheme):

Macros transform code. This must happen either all at once before the program is interpreted or compiled, or using what is essentially JIT (Just-In-Time) compilation. Either this process happens all over again every time, or the macros are permanently stripped out of the compiled program. Macros are not the code to be run, they are a shorthand (sugar) for other code to be run.

So maybe macros let you have "whatever language you want", but only on the surface. They don't let you have the MACHINE for whatever language you want. ... (well, not without layers of machines stacked on top of each other).

In my system, you can modify the underlying MACHINE so that your new syntax is directly runnable without transformation. And since the interpreter (with its underlying machine code) is an object within its own runtime, you could (for example) make a runtime copy of it, modify it, and then later apply it directly to code that implements the new syntax/semantics. Perhaps you even have a function that accepts an interpreter and some code to apply it to, e.g. invoke(engine, code).
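As a back-of-the-envelope sketch (hypothetical names, JavaScript), invoke(engine, code) might look like this, with the interpreter as an ordinary runtime value that can be copied and modified before being applied to code:

Code:
function invoke(engine, code) {
  return engine.eval(code, engine);
}

const baseEngine = {
  eval(node, engine) {
    if (typeof node === "number") return node;
    const [op, ...args] = node;
    return engine.ops[op](...args.map(a => engine.eval(a, engine)));
  },
  ops: { "+": (a, b) => a + b }
};

// Make a runtime copy of the engine and extend its semantics in place:
const newEngine = { ...baseEngine, ops: { ...baseEngine.ops, "**": (a, b) => a ** b } };

console.log(invoke(baseEngine, ["+", 1, 2]));  // 3
console.log(invoke(newEngine, ["**", 2, 10])); // 1024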

The difference is that if you apply language C inside language B inside language A, each execution "bubble" is self-executing, and there is no translating from C to B to A, nor is it a matter of A interpreting B interpreting C. In fact, you could pull the old definition of the system out from itself, and leave C as the new definition. This is possible because the new system is "nested" in the original system in the same manner that the original system is "nested" within itself; so what's really going on is a horizontal expansion, rather than a stacked one where A hosts B hosts C... (though you could certainly still implement a stacked system within it, if you wanted).
So I've gotten carried away with the wrong thing ...

Honestly, it doesn't matter if the underlying "language" model can change easily. It does need to be possible (everything up to 3.c. of my previous outline), but nothing more.

What I'm really after is a system that is open-ended from an end-user perspective. A computer (and not just computer programming) should be a powerful tool for ad-hoc creation and composition of arbitrary models and processes. Everything that makes the system usable (e.g. all behaviors and UI) should be just another entity within the user's sandbox, rather than a dictated framework that sandboxes the user into some predetermined model (example video).

Thus, it is necessary to choose an underlying model that provides a concrete low-level representation for ad-hoc structures (e.g. any LISP will do), and best if that structure can map very directly to ad-hoc use models (e.g. objects / JSON).

The point of it being possible to change the underlying substrate is so that an existing system can be evolved (rather than scrapped) when a newer "better" version comes about; and/or so that arbitrarily different systems can be embedded within each other (for example, downloading/ajaxing programs that know how to run themselves).
Maybe I should just use Scheme!

The three big things in my way are:

1. JavaScript is immediately available everywhere, and already has support for building ad-hoc trees (of objects)

2. I want objects. On the other hand:
Code:
{type:call, func:foo, args:[a,b]} //JavaScript
(foo a b) // Scheme
(compare the AST structures, not the syntax. I'm mostly concerned with what's more complex to execute & (self)-modify)

3. I'd like EVERYTHING, including the execution engine (including built-in and user-defined functions), to be part of the same unified "object tree", so that (for example) some UI could display any node and its children and provide means to edit in place or copy (drag and drop). If the "root node" is selected, you're looking at the whole system as a single entity. (Note: using the editor to edit itself within this tree and immediately changing it AS it is being used is one of the big ideas of this whole project)
Ok, so I've fallen in love with Scheme ...

The more I compare my stuff to Scheme's semantics, the harder it is to deviate from it. So at this point, I'm essentially starting from Scheme and considering what I might do differently.

(Alan Kay & Ian Piumarta would say to stand on the shoulders of those who came before you, so that's what I'm doing).

For example, I'll probably use a similar model for lexical scope & continuations (e.g. using an object graph rather than a stack). And true closures almost come free with that.

... I'll say more later; but in brief, I may steal that model, even if it's implemented differently (e.g. with objects)
So ... Design :: Square[1].back(2);

So I'm looking to redo my JavaScript implementation, and build upon Scheme's approach to lexical scope (environment chains) and continuations.

However, the fundamental operations on those things (e.g. get or set a variable in the current scope, evaluate an expression, call a function, capture/restore a continuation, etc.) can all be broken down in terms of basic object operations: Get a property from an object, Set a property on an object, delete a property from an object, Check the existence of a property In an object.

So maybe I can just provide fundamental object manipulation, and the rest can be implemented in the "user defined" space. This is not as efficient, but exposes more for experimentation / change later.
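For example, here's a minimal sketch (names assumed) of lexical scope as an object graph, where variable lookup and assignment reduce to the basic Get / Set / In operations on plain objects:

Code:
function makeScope(parent) {
  return { parent, vars: {} };
}

function lookup(scope, name) {
  for (let s = scope; s; s = s.parent) {
    if (name in s.vars) return s.vars[name];   // the "In" + "Get" operations
  }
  throw new Error("unbound: " + name);
}

function assign(scope, name, value) {
  for (let s = scope; s; s = s.parent) {
    if (name in s.vars) { s.vars[name] = value; return value; }
  }
  scope.vars[name] = value;                    // otherwise define in current scope
  return value;
}

const globalScope = makeScope(null);
assign(globalScope, "x", 1);
const inner = makeScope(globalScope);          // a closure just keeps a reference here
assign(inner, "y", 2);
console.log(lookup(inner, "x") + lookup(inner, "y")); // 3

Because scopes live in an object graph rather than on a stack, a closure (or a saved continuation) can simply hold onto a scope object after the call that created it has returned.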
suggestion: treat objects as compile-time hash-tables on symbols that flatten into structs in the runtime environment? This way methods and member variables share a single namespace (just like functions/variables in Scheme), and implement all of the usual OO-stuff as macros? Then breaking type-safety essentially also requires breaking macro hygiene. FWIW: this is very nearly how C++ works under the hood, minus some added complexity to support inheritance, which is why you can use c++filt to resugar C symbols from a library listing into C++ type signatures.


Python and other scripting languages typically just go straight for runtime hashtables, which have numerous downsides, including performance and type-safety concerns.
  