Progress so far:

The following code accepts an array of code objects and spits out code with nested CPS calls, like everything else coded so far:

Code:
var getType = function (o) {
   var t = (typeof o);
   if (t === 'undefined' || o === null) { return 'null'; }
   if (t === 'function') { return 'native'; }
   var s = Object.prototype.toString.call(o);
   return (s === '[object Array]' || s === '[object Arguments]') ? 'array' : t;
};

var compile = function(code) {
   var calls = [];
   function getCalls(code) {
      if (getType(code) !== 'array' || code.length < 1) { return code; }
      var last = [];
      for(var i = 0; i < code.length; i++) {
         var a = code[i];
         if (getType(a) === 'array') {
            getCalls(a);
            last.push(calls.length - 1);
         } else {
            last.push(JSON.stringify(a) || '' + a); // '' + a handles functions (stringify returns undefined for them)
         }
      }
      calls.push(last);
   }
   getCalls(code);
   var src = "";
   while(calls.length) {
      var c = calls.pop();
      var t = getType(c[0]);
      var s = "return window.Objects.js.tailcall(" + (
         (t !== 'string') ? "r" + c[0] :
         (c[0].charAt(0) === '"') ? "window.Objects.lookup, [env, " + c[0] + ", env.env], function (f) {\rreturn window.Objects.tailcall(f" :
         c[0]
      ) + ", [env, ";
      for(var i = 1; i < c.length; i++) {
         s += (i > 1 ? ", " : "") + (getType(c[i]) === 'number' ? "r" : "") + c[i];
      }
      src = s + "], " +
         (src.length < 1 ? "cb);" : "function(r" + calls.length + ") {\r" + src + "\r});") +
         (t === "string" && c[0].charAt(0) === '"' ? "\r});" : "");
   }
   return src;
};


Code:
INPUT:
compile([["foo", "q"], 5, [{a:1,b:2}, "y", ["baz", "z"], "a"], "b", function x(){return "x"}])

OUTPUT (indents added manually for better visualization):
return window.Objects.js.tailcall(window.Objects.lookup, [env, "foo", env.env], function (f) {
  return window.Objects.tailcall(f, [env, "q"], function(r0) {
    return window.Objects.js.tailcall(window.Objects.lookup, [env, "baz", env.env], function (f) {
      return window.Objects.tailcall(f, [env, "z"], function(r1) {
        return window.Objects.js.tailcall({"a":1,"b":2}, [env, "y", r1, "a"], function(r2) {
          return window.Objects.js.tailcall(r0, [env, 5, r2, "b", function x(){return "x"}], cb);
        });
      });
    });
  });
});


Notice how "compile" is NOT written in CPS style. What I can do is re-code this in code-objects, and then feed compile through itself, and it should convert itself to CPS for me automatically! Smile (and then I'll do it again to make sure that what it generates still works / generates the same as before)
I found another problem with this compilation, and that's that it assumes that there are no "syntax" functions (think "macros", though not quite). There are two ways I can handle this:

1. The compiler must actually lookup each function (rather than having the lookup happen as part of the compiled code), and then behave accordingly (i.e. not deconstruct arguments for "syntax" functions). This might mean that calls store direct references to functions, which might still allow those functions to change if they are referenced relative to the compiled function's scope.

2. Change semantics so that the function-caller is responsible for indicating whether argument(s) should be evaluated or not. This might be ok because the end goal is to have a view of the code that can be better than just the literal representation of things.

Edit: Another issue is that any code nested within a call to a "syntax" function would never be compiled, and there might be no way around that. This is a problem because the eval and compile functions might need to use such functions (e.g. "each" or "loop" or "let"), but that would cause an infinite loop if (for example) eval contains code that must be eval'd. One ought to be able to LOOP and have that compile, though.

The potential solution is to try to write those functions not as "syntax" functions. Maybe instead of taking a code-block, they can take a call-back function. And maybe after that, I can somehow also support taking a code-block.
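To illustrate the callback idea, here is a rough sketch in plain JS. The name "each" and the shapes used are illustrative stand-ins, not the project's actual code:

Code:
```javascript
// Sketch only: "each" as a plain function taking a callback,
// rather than "syntax" that receives an un-evaluated code-block.
function each(obj, cb) {
  Object.keys(obj).forEach(function (k) { cb(k, obj[k]); });
}

var out = [];
each({ a: 1, b: 2 }, function (k, v) { out.push(k + ":" + v); });
out.join(","); // "a:1,b:2"
```

Because the callback is just a function value, the caller needs no special "syntax" treatment; argument evaluation works the same as for any other call.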

Any thoughts?
TODO: Start using "issues" feature on GitHub, instead of relying on this forum to house my thoughts & ideas.

I'd still make mention here though (and perhaps even carry on as before); but it would be better for my decisions and thoughts on them to be more easily visible as individual items tied to the project itself.

... I may even go back to the start of this thread and scrape all my thoughts into issues, for reference ...
More thoughts about my compiler problems:

1. Maybe I can forego "syntax" functions altogether, for several reasons: They are not quite "macros", so why the need to emulate them? It was just so that functions could have their own special "syntax". However, I feel that it is more important for the underlying object-representation of code to be a literal one, and let the most ideal representations of things be handled at the UI level. That makes "syntax" less relevant to begin with.
2. Most instances where "special syntax" is called for, can instead be handled with anonymous functions (aka lambdas). Example:

Code:
// With "special syntax":
["each", ["lookup", "foo"], ["k", "v"], [...]]
// With function:
["each", ["lookup", "foo"], {args:["k", "v"], body:[...]}]

3. If this means embedding "functions" (which are objects wrapped around code, as above), then does compiling a function also cause any such nested functions to be compiled? A caveat is that entities can be directly referenced in code rather than looked up by name, so an "embedded" function may just be a reference to a function that is shared by many. Maybe I can assume for now that that is not the typical case, and make compilation apply recursively to nested functions.
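As a rough illustration of point 2 (purely a sketch; "applyLambda", the scope shape, and the use of a JS function for the body are stand-ins for the real interpreter machinery):

Code:
```javascript
// Sketch: applying an embedded {args, body} function object by binding
// its parameters in a fresh scope. A block with no recorded parent
// chains to the CALLER's scope (i.e. dynamic scope).
function applyLambda(fn, argValues, callerEnv) {
  var scope = Object.create(fn.parent || callerEnv);
  fn.args.forEach(function (name, i) { scope[name] = argValues[i]; });
  return fn.body(scope); // stand-in for "evaluate the body in this scope"
}

// Demo: a block over "k" and "v" that can also see the caller's "foo"
var block = {
  args: ["k", "v"],
  body: function (s) { return s.k + "=" + s.v + " (foo=" + s.foo + ")"; }
};
applyLambda(block, ["a", 1], { foo: 42 }); // "a=1 (foo=42)"
```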
Progress update:

I have done away with "syntax" functions, and now "code-blocks" are just embedded directly as functions. Also, functions that do not have a parent scope set are assumed to be nested code-blocks, and are evaluated within the context of their callers (aka "dynamic scope"). For control-constructs, I've adopted the convention of evaluating them not in the context of the control function, but in that of its caller.

Example:

Code:
['if', [..condition..], {code:[..true-code..]}, {code:[..false-code..]}]

This will evaluate either the true or false code, but only the proper block based on the condition. Also, because these code blocks lack a "parent", they assume the context that the whole "if" statement is embedded within, and this while still invoking the code-blocks as if they were just any other function.

The significance of this (other than that it solves the how-to-handle-embedded-blocks problem) is that the compile function (that I have been struggling with) can now compile these nested code-blocks as if they were normal functions; the result works as if it were nested code, and the compilation process does not have to know anything about it Smile
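A minimal sketch of that idea (illustrative names only; the real "if" and block-invocation machinery differ):

Code:
```javascript
// "if" as an ordinary function: it receives the evaluated condition
// plus two un-evaluated code-blocks, and runs only the selected one.
// Blocks with no parent scope run in the CALLER's context.
function ifFn(callerEnv, cond, thenBlock, elseBlock) {
  var block = cond ? thenBlock : elseBlock;
  return block ? block.run(callerEnv) : null;
}

// Demo with trivial stand-in blocks that read the caller's "x":
var caller = { x: 10 };
var t = { run: function (env) { return env.x + 1; } };
var f = { run: function (env) { return env.x - 1; } };
ifFn(caller, true, t, f);  // 11
ifFn(caller, false, t, f); // 9
```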
Status update:
So what the previous update means is that I am (hopefully) VERY close to being able to have a working compiler; and from there, I can (hopefully fairly rapidly) rewrite the whole system in itself and be on the way to having it be self-defined / self-creating.

What this also means is (many things, but one of those things is) that I can in theory port the entire running system into other platforms or languages, all using the same code (including the code that does the compiling, etc.).

As a further update, from a previous post about what I'd need to do before getting this to work...
shkaboinka wrote:
There are some "base functions" which must be defined ... such as: Get, Set, Delete, Has, Type, Keys, If, Length, Call-CC, and operators like And, Or, Not, + - * / % = < >, etc.

... I've now implemented all of these, plus a few others, so I should have enough in my "toolbox" to recode the whole system in its own terms. (Note: I have alternate mechanisms in place of Call-CC).

This might mean that I can get the compiler & self-defined-ness to begin working TONIGHT. What an awesome way to end/start the year, with the beginnings of a working self-bootstrapped system!

EDIT: ... SUCCESS!!!

...Although there is still much work to do to get it to work for everything, particularly short-circuiting certain funcs to native-JS commands, e.g. "if(...){...}" instead of actually looking up and invoking the "if" function. Until then, this (for example) would create an endless loop when "if" tries to call itself instead of invoking the native "if" command.

EDIT 2: Compilation now also compiles embedded functions, which is necessary because the code-blocks within control constructs (like "if") exist in the form of embedded functions. (In these cases, it would appear that code was only getting partially compiled). I've also added a few auto-running tests that demonstrate that compiling works, even if done inline within an expression.
Update:

I've run into some issues related to endless-loops from base functions trying to call themselves (but more so with the "lookup" func, since that is what non-native code uses to get or set ANY variables).

At first, I thought I could get around this by NOT having a compiler, and requiring the base functions to just be provided outright in native form. However, this would have to include "lookup" as well (due to the aforementioned problem), even though it could otherwise be very nicely coded non-natively in terms of the other funcs.

I've begun exploring several options in parallel:

1. Use a set of "patterns" that the compiler can use to replace calls to base functions with actual native code. The base functions can then be defined by calling themselves, and the compiler would do the magic of inserting their native implementation. This option is appealing for cross-platform design.

2. Expand get/set to use the calling-scope in place of a null container, so this can be used in place of lookup (even if just within lookup itself). I don't like how this muddies what should be a very base operation though.

3. Alter the compiler to handle lookup specially (in certain cases) by inserting native code to get values directly rather than generating code to call "lookup".

4. Alter the eval (interpreter) to handle "lookup" specially. This has issues though.

I've had good progress with 1, 2, and 3, and 3 is the one that's really showing potential. (2 has issues, and 1 cannot quite do what 3 can).

I could also apply 2 to get more optimized code after compile (and if the nested code-blocks inserted for "if" calls give me trouble), but so far I've made real progress with 3, and I'm just a hair away from it working completely (there is a small bug that I can easily step around, but I want to understand and fix it regardless).

Note: the special handling of "lookup" also applies to assign, exists, and remove (which are all augmented versions of get, set, has, and delete, because they look into the current scope and outer scope (etc) and allow multiple arguments; whereas the base funcs only deal with one property at a time in one given object).
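For the curious, here is roughly what the special-cased "lookup" might expand to: a native scope-chain walk, instead of generated code that calls the "lookup" function. ("nativeLookup" and the scope shape are illustrative, not the actual compiler output):

Code:
```javascript
// Sketch: native equivalent of "lookup" that walks outward through
// scopes (each scope's "env" property points to its outer scope).
function nativeLookup(scope, name) {
  for (var s = scope; s; s = s.env) {
    if (Object.prototype.hasOwnProperty.call(s, name)) { return s[name]; }
  }
  return null; // not found in any enclosing scope
}

var outer = { a: 1 };
var inner = { b: 2, env: outer };
nativeLookup(inner, "a"); // 1 (found in the outer scope)
```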
I have now successfully begun recoding the system in itself!

The exists, lookup, assign, and remove functions are now non-native, and are compiled at startup.

(They cannot actually be fed directly through the interpreter, because the compiler short-circuits them to prevent infinite loops; but the same code can then be duplicated and fed into the interpreter, and work as expected)

From here, I can recode the rest of the system in itself (including the interpreter and compiler).

WOOHOO!

(Note: That's option 3 from the previous post, and I've deleted the others except for compile-patterns (1))
Update: I've now successfully re-coded everything but the compiler into non-native code.

The next step is to recode the compiler, and then write a function that generates the source code of the whole system from its runtime self. (The second part is the easy part).

The system will then be self-defined and self-creating ... except that it will initially have no user interface, so that will be the next major step.

EDIT: Newer Update: I've re-coded 5 out of 7 compiler funcs to non-native code.

Update again: 6 out of 7. However, the last one ("buildCalls") is huge, and I'm breaking it into pieces to make it easier (since there are usually hard to find errors in this process). I'm maybe 3 of 7 pieces of the way through it (currently trying to fix an error in the 4th piece).
What the heck is this thing?

I've repeatedly failed to provide a clear, concise, or consistent answer to that question. But since it has recently come up again, I'll give this answer another try:

----

Edit: You could also just say that I'm trying to implement the kind of system called for in the following (very short) article: http://prog21.dadgum.com/182.html

----

Imagine a programming language in which the rules (syntax, semantics, compilation process, etc.) could be changed directly within your program. Or an IDE that can be changed or manipulated by the very code that it is viewing. Imagine that you write a bit of code, and suddenly your compiler, language, or editor changes with it.

Have you ever been using a program (or app, webpage, etc.) and wished you could just do the thing you want, but it only provides the ability to do X, Y, and Z? Well, imagine if that program provided a user interface that allows you to reach into the underlying program while it is running and alter its behavior, appearance, (or even the very interface that makes this possible) in an intuitive visual way.

Imagine if instead of manually coding new features into a program, you could just run some code that does it for you by programmatically manipulating the code as if it were just a big data-structure. Imagine if that code is part of the very program that it is manipulating, and can thus manipulate itself.

Imagine if you could literally just grab a function, drop other entities onto it, and watch it spit out the result. Imagine if elements of a program (e.g. functions, data, structure) could be created and manipulated in this manner. Imagine how that could change programming, debugging, and testing.

Imagine if such a tool existed, but you decide that there is a better way to do (or visualize) something than what it allows. Well, you just manipulate it on the fly, and BOOM! It suddenly works differently. Imagine if you were able to manipulate it so that it was able to keep a replayable history of all your actions, or spit out a runnable copy of itself which you could then save or send over the wire to somewhere else.

----

The short answer is that I'm making a tool that would make such things possible, and in a way that is open-ended and easy to modify.

The tricky thing to explain, however, is that this is not about some magic tool or language that works any specific way or enables any specific ability (like those listed above). It's about creating something so fluid that it can do or become whatever you want or need, and extend that same capability to the programmer and to the end-user.

It's about removing the barriers imposed by traditional programming languages, tools, interfaces, (etc.), and blurring the line (or distinction) between programming & user interaction; compile-time & runtime; the endless possibilities of programming & the set-in-stone rules of a programming language; the program being made & the tools used to make it.

What is a computer (and software) if not a universal machine -- a magic tool that can do or simulate just about anything, in whatever visual or interactive manner you like? And yet, one does not gain these abilities simply by having a computer at their disposal. But there is no reason why this shouldn't be, since the tool for making or manipulating software is ... software! And the tool for manipulating that software is ... software! There's nothing special about one over the other, and no reason that software cannot literally be its own tool to change itself -- and thus also grant that same power to the end-user.

I could make a strong argument that part of the problem is the format & means through which traditional programming occurs (we as programmers don't subject end-users to this madness, and there is no reason that it has to be that way for us!).

However, that problem is only secondary, because even if you could discover the "perfect" language or interface or whatever for everything, it would still be locked-down to working a certain way, and the programmer or user would be constrained to whatever fixed set of tools & operations that it provides -- and thus the situation remains fundamentally unchanged.

That is the world of software & computing today. If you want to do something differently, then you must go through a long cumbersome process to invent a new language or tool -- and all just to swap one set of constraints for another.

(This is why I have become disenchanted with designing a new programming language or compiler)

----

In my next post, I'll outline what is required to make this work.
(Continuing from previous post)

What is required to make this all work?

So this is where the "no tool" argument comes full circle, because there does need to be some sort of initial tool or environment to allow this stuff to happen in the first place. But how does one go about making a tool that does not constrain its users (or itself) to a set-in-stone set of operations through a set-in-stone interface?

We can begin to derive the answer as follows (or skip to the summary):
(I'll say "user" to refer to any consumer: human, program, or the tool itself)

The rules and interface of the tool must be changeable by the user.

Such changes must not depend on (not be constrained by) another tool. Thus, the tool must be capable of modifying itself, and must provide all necessary tools to do so (e.g. an "editor").

Since the tools for editing the tool are part of the tool itself, even those are subject to change (e.g. using the editor to edit the editor). It is thus more important to get this working at all than to try to create the perfect tool for doing so (and then mold it further from there using the tool itself).

Everything in the tool must be composed of some kind of "building blocks" that can be modified & inspected in place, or assembled into any ad-hoc representation on the fly. This leads to something like JSON (not as a textual format, but the structural entities that it represents).

The same goes for all the underlying code of the tool: If it is locked down and requires a separate tool to inspect and modify it, then the tool remains constrained by external factors. Thus, the tool must also be able to inspect & modify all of its own code on-the-fly, just as with everything else.

Thus, all code within (or underlying) the tool must also be composed of the same "building blocks" as everything else, so that it is subject to the same manipulations (etc.) as everything else (see Homoiconicity). Essentially, all code would exist in the form of an AST that can be inspected & modified on-the-fly.
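To make the "code as building blocks" point concrete, here is a toy example (the shapes mirror the examples earlier in this thread, not an exact spec) of code being rewritten by ordinary data manipulation:

Code:
```javascript
// A program is just nested arrays/objects, so other code can inspect
// and rewrite it directly, like any other data-structure.
var program = ["if", ["lookup", "x"], { code: [["lookup", "then"]] },
                                      { code: [["lookup", "else"]] }];

// "Refactoring" is just ordinary data manipulation:
function renameLookups(code, from, to) {
  if (Array.isArray(code)) {
    if (code[0] === "lookup" && code[1] === from) { code[1] = to; }
    code.forEach(function (c) { renameLookups(c, from, to); });
  } else if (code && typeof code === "object") {
    renameLookups(code.code, from, to);
  }
}
renameLookups(program, "x", "y");
program[1]; // ["lookup", "y"]
```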

The tool must contain an interpreter (https://en.wikipedia.org/wiki/Interpreter_(computing)) capable of running the "code" described above (since it cannot run itself in that form). This also removes the constraints that a traditional programming language would impose, because the "language" that the interpreter understands can be modified just by directly modifying the interpreter.

But wait: How can the interpreter be modified within the tool, unless it is also made of the same code building-blocks that it is supposed to run? Wouldn't the interpreter then require another interpreter to make it run? ... The solution I found is to also embed a compiler within the tool, which compiles the interpreter from building-blocks into natively-runnable code. (The compiler can also be made of building-blocks, and rely on the (compiled) interpreter to run -- just like everything else).
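As a toy illustration of compiling building-blocks into natively-runnable code (this is NOT the project's compiler; the code shape and "compileTiny" are invented for illustration):

Code:
```javascript
// Sketch: turn a tiny ["+", "a", "b"] building-block into a native
// JS function using the Function constructor.
function compileTiny(code) {
  // code shape assumed: [operator, paramName, paramName]
  var src = "return " + code[1] + " " + code[0] + " " + code[2] + ";";
  return new Function(code[1], code[2], src);
}

var add = compileTiny(["+", "a", "b"]);
add(2, 3); // 5
```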

The tool must provide an interactive interface (UI) for visualizing and manipulating all "objects" within it. All the "code" that makes this interface work (and the operations that it provides to the end-user), can also be manipulated through this interface. In this way, the tool can really be shaped and sculpted into the best tool for any job (including that very "shaping" process itself).

In summary, the minimum things needed to be "Bootstrap Complete" (read like "Turing complete") are:

Must be fully self-contained and self-defining -- including all tools to modify the tool
Everything within the tool (code included) must be composed of common "building blocks" that can be modified & assembled on the fly.
The tool must contain its own interpreter to run the code, and a compiler to make the interpreter runnable.
The tool should be whatever minimum form of these things is required to get it to work in the first place, and then sculpted into something "better" (or anything else) from there.

See my next post for a description of how the above points can be used to bootstrap such a tool, and my "roadmap" for doing so.
This actually sounds pretty cool, and I'm only beginning to understand what you're trying to describe. Do you, by chance, have any screenshots or other previews of some kind? Of the UI, I mean.
Michael2_3B wrote:
This actually sounds pretty cool, and I'm only beginning to understand what you're trying to describe. Do you, by chance, have any screenshots or other previews of some kind? Of the UI, I mean.


I'm not even to the point of a UI yet. That is further in the process, which I have only begun to explain above.

For now, I just open a blank page ("about:blank") in the browser, open the debugger by pressing F12 (all browsers have this), and then copy-paste the JavaScript code for the whole thing into the console.

The current roadmap looks like this (with the UI coming into play at step 3):

1. Re-code everything from JavaScript (JS) to the built-in language of this tool (TL). The result is a JavaScript program that does the following in order: create basic operations and a compiler directly in JS; create higher level operations and an interpreter in TL; compile the interpreter (so that it can run); replace the compiler with one written in TL. From here on out, everything else (including the compiler) is written in TL and is run by the interpreter. ... I am NEARLY finished with this step.

(For now, I'm actually compiling EVERYTHING. The interpreter handles compiled code just fine)

2. Create an operation that generates the whole JS program (as described above) by feeding all entities in the running program into the compiler. From here on out, the entire program can be modified by interacting directly with the running program (albeit programmatically through the browser's F12 console), and then regenerating it with the added changes. (This instead of editing textual source code and manually pasting it back into the console)

3. Create an interface (a UI) which allows one to inspect and modify everything VISUALLY (instead of through the F12 console). As nice as it would be to have this earlier, it is more important that THIS step can be done as interactively as possible, because there are endless ways that the UI can evolve. (There are many possibilities for how code and tools can be visualized and designed and interacted with, and that will be a greater ongoing experiment in its own right)

4. Add compilers for other languages and platforms. Since everything will exist in the built-in language, the only things that need to be recoded are the fundamental operations that MUST be provided natively, and how the compiler converts instructions into the native code. (There are surprisingly few operations needed to make this all work, but that's another topic). From here on out, the entire system can be regenerated for a different platform or language, and then the same exact system -- including the UI and all compilers for all platforms created so far -- can be loaded up in that new platform. This works because all the code (including the thing that generates the native start-up code that initially gets the whole thing running) is written in the language of the tool (TL).
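A very rough sketch of what step 2's "regenerate the program from its runtime self" could look like ("generateSource" and the entity shapes are hypothetical; the real operation would feed entities through the actual compiler):

Code:
```javascript
// Sketch: walk every named entity and emit the JS that recreates it
// at startup, producing the program's own source from its runtime self.
function generateSource(entities, compileToJS) {
  return Object.keys(entities).map(function (name) {
    return "env[" + JSON.stringify(name) + "] = " + compileToJS(entities[name]) + ";";
  }).join("\n");
}

// Demo using JSON.stringify as a stand-in for the real compiler:
generateSource({ x: 5, msg: "hi" }, JSON.stringify);
// env["x"] = 5;
// env["msg"] = "hi";
```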

I hope that answers your question and provides better insight into what to expect.
Update: I just (finally) finished updating the following previous posts that describe what the project is, how it works, and where it is going:

"What the heck is this thing?"
"What is required to make this all work?"
Plan / Roadmap for bootstrapping it all

In many cases, I have added missing content and removed unnecessary details to make them (hopefully) easier to follow. I think this is by far my best explanation of this project (across the board), so please give them a second look if you are interested (or if you were previously confused before) Smile
I finally finished re-coding the compiler, so now the whole system is (finally) coded in itself!

A quick refresher on where things stand:

Completed Steps:

1. Make an interpreter that runs code made up of objects ("objects-code") as opposed to raw text

2. Create an objects-to-native compiler ("native" = JavaScript for now)

3. "Bootstrap" the interpreter: Recode it in objects-code, and feed that code into the compiler at startup

4. "Bootstrap" the compiler: recode it in objects-code, and ... (continued in step 5)

From this point: all further coding (i.e. remaining steps below) is in objects-code

Next Steps:

5. ... Replace the native compiler code (created manually) with the generated compiler code (obtained by feeding the objects-coded compiler into itself)

6. "Bootstrap" the whole system: Create a function which generates the entire native (JavaScript) code of the whole system, based on what "currently" exists at runtime.

From that point: all changes to the system can be done by interacting with the program (albeit through the browser's F12 console), and then saving the regenerated result (as opposed to editing JavaScript code in a text file).

7. "Bootstrap" the user-experience: Create an interactive GUI, so that all parts of the system can be manipulated as described above, but in a visual manner rather than through the F12 console.

From that point: it's essentially done! The tool can be used to explore new ways to create & interact with software in a visual manner, adapting itself along the way! Smile

8. Create new compilers & bootstrappers for other languages & platforms

From that point: the system can "teleport" to other mediums and escape the browser, and potentially reshape how software is used & created, all the way down to the metal!
So I just found this (short) article: http://prog21.dadgum.com/182.html

You could say that this project is about making the kind of system that this guy is calling for. (I've also added this to my previous post explaining the purpose of this project)
I'm still working on this whenever I have a bit of time, although I haven't been on Cemetech in a long time.

By the way, I've flipped from starting from the ground up to creating a usable tool and working downward (i.e. start with the UI, and bootstrap it later). See: https://github.com/d-cook/Interact

Along the way, I've also created a tool that makes rendering visual content much easier by:
- Removing almost all boilerplate code
- Providing a declarative model. Just hand it an updated list of content, and the rendering updates itself.
- Easy event handlers.

Example:

Code:

// Render a filled blue circle always centered around the cursor:
var r = Render("top left", 500, 500);
r.onMouseMove((x, y) => r.render(["filled blue circle", x-10, y-10, 20, 20]));
  