Static allocation means that the thing being allocated exists as part of the program to begin with. There is no runtime work to "get" memory, because it uses memory that is already part of the program, as if it were literally part of the code. But "static" also has other implications, namely that something "static" exists as one copy that never gets recreated or goes away. Static initialization means that the assignment is not done by program instructions; instead, the compiler takes the initial value being assigned and puts it directly into the program where the variable is located, so that the variable simply "starts" with that value.
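As a rough Java analogy (Java hides raw memory, but the "one copy, persists forever" semantics of static storage still shows; names here are made up for illustration):

```java
public class StaticDemo {
    static int counter = 0; // one copy, set up before use, never recreated

    static int bump() {
        int local = 0;   // recreated (and re-initialized) on every call
        local++;
        counter++;       // the single static copy accumulates across calls
        return local;    // always 1, since the local starts over each time
    }

    public static void main(String[] args) {
        bump(); bump(); bump();
        System.out.println("local each call: 1, static counter: " + counter);
    }
}
```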

The setup I am using for namespaces is quite like Java packages in style, but more so just a way to apply a "name" to different areas. As far as how things will come together in the end ... well, it's going to assume that it's all going into one program. However, it will be organized in a way that it can easily be picked apart and put back together ... having the compiler act as an API means that you can take the result of the compilation and do something with it. Also, you can attach variables, methods, classes, etc. to arbitrary addresses, to say "There is a byte located at this place in memory; just trust me on that. Pretend it's a variable ... this is where the OS stores _". Or you can declare a parameterless function and say "But don't make the body for this; it already exists at [some certain address in memory]". And so on. The assembly output will use logical labels, so it should not be too hard to parse through and see what's what. I'm not sure if that's what you meant by packaging ... but the native interface will be at the machine level, though it can be tied to variables and functions etc.
Would it be possible to do this for GlassOS when AHelper releases a beta?

Can I suggest a name? OK, so OPIA sounds like the beginning of opium. So I go on Google Translate and translate "opium" to Malay. What does it turn up? "Candu." I think that is a cool name for a language.

Link:
http://translate.google.com/#en%7Cms%7Copium
seana11 wrote:
Would it be possible to do this for GlassOS when AHelper releases a beta?

Can I suggest a name? OK, so OPIA sounds like the beginning of opium. So I go on Google Translate and translate "opium" to Malay. What does it turn up? "Candu." I think that is a cool name for a language.


Quoting this, because it appears to be an "empty" post to at least myself, Forty-Two, and shkaboinka (and probably everyone else?). Removing the Google Translate link from the quotation "fixes" this...

Edit: Since he fixed it now, I'll use this post to say that this looks like it could be pretty cool.

I assume you're going to write the 83+ compiler first, and then extend it to the 82, 83, 85, and 86? It would be nice to have one language that ran on all z80 calcs without minor tweaking (z80 Basic, for instance, isn't very different between 82 -> 84+, but there are new commands and things).
That's not a bad idea! And yeah, I know what it sounds like Smile
It means Object-oriented Pre-Interpreted Antidisassemblage. ... Antidisassemblage has become kind of a relic now (and a flop; oy ... that's why OPIA exists), and THAT name was arbitrary. I suppose I can call OPIA its own language now. ... I keep having people ask me what I'm going to "call it" ... erm, OPIA?

However, I do like "Candu"! ... It has a positive implication to it ("can do!"), which totally agrees with the goals of the language. You really can do anything with it at all! I will think about that, but I like that so far. ... It may be hard to let go of OPIA, since the project has been tied to that for so long. I suppose the project can keep that name, and I can change the language-name.

...by the way, for anyone who wants more in-depth detail about the compiler theory, see my similar discussion on Omnimaga
Could you cross-post that over here so that others, like myself, can see it?
"...

Language aside, I can talk compiler-theory all day ... I have my own technique for Tokenization (inspired by Java's approach to enums) that I think is the stuff ... it seems like a small part of the process, but using final class instances for tokens gives you some solid tools that can make a compiler amazingly fast (and cleaner)!

...

...instances and tokens... Yes. This is why I did not actually use a Java enum; but my Token class contained a whole list of static final instances for all the keywords and operators, plus some "signals" used by the compiler. For identifiers ("names"), it stored an internal tree of Token objects for each identifier it found. When it encountered one, a method would feed it into the tree: if it existed already, it would return THAT instance; otherwise it would make a new instance, put it in the tree, and return that. That way, comparing names, keywords, operators ... anything ... was a matter of straight-up reference comparison (== rather than .equals). For numbers and strings (and other literals), though, I just let it make new ones.

Anyway, each token object contained a final int value that flagged what "kind" of token it was, another final int that stored bit-flags (or, for number tokens, stored the numeric value), and a String containing the actual text of the token. That might sound like a lot of work, but each final instance is declared with the right values in the first place, and identifiers get certain values set ... and then there were methods that looked at that kind of information, so you could just take a token and ask "is it an operator? is it an overloadable operator ... with 1 or 2 arguments (or both)? is it a keyword? is it a control-flow construct? is it numeric?" ... etc., and those checks would not have to compare against some 50 tokens, but just check a simple value or a bit.
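The interning idea described above can be sketched in a few lines of Java (names are illustrative; the original used a tree of tokens rather than a HashMap, but the effect is the same):

```java
import java.util.HashMap;
import java.util.Map;

final class Token {
    static final int KEYWORD = 1, OPERATOR = 2, IDENT = 3, NUMBER = 4;

    final int kind;     // what "kind" of token this is
    final int flags;    // bit-flags (or, for number tokens, the numeric value)
    final String text;  // the actual source text

    // Fixed vocabulary: exactly one shared instance per keyword/operator
    static final Token IF   = new Token(KEYWORD,  0, "if");
    static final Token PLUS = new Token(OPERATOR, 0, "+");

    // Interning pool for identifiers
    private static final Map<String, Token> idents = new HashMap<>();

    private Token(int kind, int flags, String text) {
        this.kind = kind; this.flags = flags; this.text = text;
    }

    /** Return THE token for this identifier, creating it on first sight. */
    static Token ident(String name) {
        return idents.computeIfAbsent(name, n -> new Token(IDENT, 0, n));
    }

    boolean isOperator() { return kind == OPERATOR; }
}
```

Since `ident("x") == ident("x")` always holds, every later comparison in the compiler is a plain reference comparison rather than a string compare.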

As for OOP on everything else, I had OOP constructs to represent classes, functions, variables, blocks of code, statements, expressions, etc., so pretty quickly I could get a whole program into a very easy-to-manage tree-like structure, and easily go "Ok, let's look at all the classes and make sure they don't have any circular inheritance, and that they behave together". I had constructs for DataTypes too, which were flexible (polymorphic; some were just primitive Tokens; others were classes; others were complexities of arrays of things).

One thing I may do differently this time, though ... I went back and made everything that could be the "entity" referred to by an identifier inherit from "Entity", so that expressions and variables and so on had direct references to "entities". Then I could go back through and replace all the references to Identifier tokens with the actual things they were referencing, without damaging the structure of the parse-tree ... That was actually a bit of a nightmare, because it had to be done in a very careful order so that one could identify inner/inherited namespaces properly ("a.b.c"); and the way it handled "private" and "protected" stuff was a bit interesting (it used the "invisible" technique, so things were suddenly "undefined" rather than just restricted). ... But I've done more study about ways to use a symbol table, and I think it would be fine to just stick with that
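A minimal Java sketch of that kind of structure (all names hypothetical), including a circular-inheritance check of the sort mentioned above, done here with the two-pointer cycle trick:

```java
// Illustrative AST-style constructs; not the actual OPIA compiler's classes.
abstract class Entity { final String name; Entity(String n) { name = n; } }

class Variable extends Entity {
    final String type;
    Variable(String name, String type) { super(name); this.type = type; }
}

class Function extends Entity {
    final java.util.List<Variable> params = new java.util.ArrayList<>();
    Function(String name) { super(name); }
}

class ClassDecl extends Entity {
    ClassDecl parent; // follow these links to detect circular inheritance
    ClassDecl(String name) { super(name); }

    boolean hasCircularInheritance() {
        ClassDecl slow = this, fast = this;
        do {
            if (fast.parent == null || fast.parent.parent == null) return false;
            slow = slow.parent;        // advance one link
            fast = fast.parent.parent; // advance two links; meets slow iff a cycle exists
        } while (slow != fast);
        return true;
    }
}
```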

...(Was told that the "Dragon Book" was prerequisite to making a good compiler)...

Yes, thank you. I've found many compiler books, and most of them were way outdated and all about the importance of using tools like "Lex" etc. ... Anyway, I've only ever found 2 books really useful, and that was one of them. I at least glanced at every page, skimming over the stuff that was repetitive and ignoring what I wasn't into (e.g. JIT and parallel stuff ... that's not happening on z80); but I totally soaked up some good stuff from it that I wasn't finding elsewhere!

The other book, which I think is the best book in the world for what it's about, is "Modern Compiler Implementation In Java" by Appel. That book showed all the good stuff about polymorphic language design, data-flow analysis, register-coloring, etc. Honestly, I don't quite remember what came out of the Dragon Book for me, just that it was helpful and I wouldn't have found it elsewhere.

There have been ideas I've come across on my own as well, though, only to discover that they've been talked about in more than a few places (code mutation to store variables; using jump tables; and symbol tables ... I never understood them the way the books tended to explain them, but I came up with a "great idea" about how to track that kind of stuff, and realized that it was the same thing; only I'd do it in a more OOP manner).

...But yes, references are VERY important. When I first started on compiler design, I thought I could just "figure it out"; but I was referred to the "Let's Build a Compiler" tutorials by Jack Crenshaw, from which I mastered the idea of the recursive-descent pattern and recursive expression parsing and tokenizing (which actually are quite fundamental if you want to make a compiler, period). ...Anyway, from there it was my own tinkering and thinking, and then the Dragon Book, and then the Java book, and then a bunch of outdated books that I didn't get squat from ... But those 2 books (and Jack Crenshaw, which I think is an excellent place to start) helped more than anything else could have.
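For anyone curious, the recursive-descent pattern Crenshaw teaches fits in a handful of methods, one per grammar rule. Here's a minimal Java expression evaluator in that style (illustrative only, not OPIA's parser; digits and parens only, no error handling):

```java
// Grammar:  expr   := term (('+'|'-') term)*
//           term   := factor (('*'|'/') factor)*
//           factor := NUMBER | '(' expr ')'
class Descent {
    private final String src;
    private int pos;
    Descent(String src) { this.src = src.replace(" ", ""); }

    int parse() { return expr(); }

    private int expr() {                       // each rule is one method that
        int v = term();                        // calls the rules below it
        while (pos < src.length() && (peek() == '+' || peek() == '-'))
            v = (next() == '+') ? v + term() : v - term();
        return v;
    }
    private int term() {
        int v = factor();
        while (pos < src.length() && (peek() == '*' || peek() == '/'))
            v = (next() == '*') ? v * factor() : v / factor();
        return v;
    }
    private int factor() {
        if (peek() == '(') { next(); int v = expr(); next(); /* eat ')' */ return v; }
        int start = pos;
        while (pos < src.length() && Character.isDigit(peek())) pos++;
        return Integer.parseInt(src.substring(start, pos));
    }
    private char peek() { return src.charAt(pos); }
    private char next() { return src.charAt(pos++); }
}
```

Note how precedence falls out of the call structure: `term` binds tighter than `expr` simply because `expr` is built out of `term`s.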

---> If anyone wants to be involved more directly, I just made a FaceBook group. Just search "OPIA" <---
shkaboinka wrote:
---> If anyone wants to be involved more directly, I just made a FaceBook group. Just search "OPIA" <---


Can you post a link? I can't find it.
seana11 wrote:
DDD= Deadly Diamond of Death:

Class Player{play()}
Class CDPlayer Extends Player{play()}
Class DVDPlayer Extends Player{play()}
Class ComboPlayer Extends CDPlayer, DVDPlayer {}

Which play() does ComboPlayer use?

I haven't had the "pleasure" of working with code that has a DDD, so I don't know how most languages handle that case. However, I would think that a language should be designed such that it requires you to either override the play() method, or tell it which parent's play() method to use.

Since it's a combo player, I would expect it to provide its own implementation of play() which calls the appropriate parent's play() method, depending on the medium (CD or DVD) that it's trying to play. I think the C++ syntax for this is CDPlayer::play() and DVDPlayer::play().
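For what it's worth, Java bans the diamond for classes but recreates it with default methods on interfaces, and it resolves the conflict exactly as suggested above: the class MUST override play(), and it may delegate to a specific parent via Interface.super.play(). A sketch (hypothetical names mirroring the example):

```java
interface Player   { default String play() { return "generic"; } }
interface CDPlayer  extends Player { default String play() { return "CD"; } }
interface DVDPlayer extends Player { default String play() { return "DVD"; } }

class ComboPlayer implements CDPlayer, DVDPlayer {
    private boolean discIsCD = true;

    // Without this override, javac rejects ComboPlayer: it would inherit
    // unrelated defaults for play() from CDPlayer and DVDPlayer.
    @Override public String play() {
        return discIsCD ? CDPlayer.super.play() : DVDPlayer.super.play();
    }

    void insert(boolean cd) { discIsCD = cd; }
}
```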
christop wrote:
seana11 wrote:
DDD= Deadly Diamond of Death:

Class Player{play()}
Class CDPlayer Extends Player{play()}
Class DVDPlayer Extends Player{play()}
Class ComboPlayer Extends CDPlayer, DVDPlayer {}

Which play() does ComboPlayer use?

I haven't had the "pleasure" of working with code that has a DDD, so I don't know how most languages handle that case. However, I would think that a language should be designed such that it requires you to either override the play() method, or tell it which parent's play() method to use.

Since it's a combo player, I would expect it to provide its own implementation of play() which calls the appropriate parent's play() method, depending on the medium (CD or DVD) that it's trying to play. I think the C++ syntax for this is CDPlayer::play() and DVDPlayer::play().


But it can all be avoided by disallowing multiple-inheritance, thus eliminating unnecessary complexity.

Some comments after reading your blog:

1. Why is a boolean a byte? That's 7 bits more than it needs!

2. Will there be a GC?

3. You cite 5 access modifiers that are the same as Java's (public, private, static, protected, and none), but there are only 4 in Java (public, private, protected, and none). I assume static is misplaced.

4. Why not make all methods implicitly virtual, and have them be final if they should not be overridden? Why must both the original method and the overriding method be declared virtual, and not just the original? Comment: I would love for you to write the OO:What is OOP:Virtual Functions section. I also recommend referencing this url when you talk about inheritance.

5. Once fully implemented, will this be "write once, run anywhere", or will it have to be compiled with a different set of configurations for each calc?

6. Can you implement assertions?

7. I'm getting a sense of pass-by-value from your Dev Info:Value Passing/Returning section. Correct?
seana11 wrote:
Would it be possible to do this for GlassOS when AHelper releases a beta?

Can I suggest a name? OK, so OPIA sounds like the beginning of opium. So I go on Google Translate and translate "opium" to Malay. What does it turn up? "Candu." I think that is a cool name for a language.

Link:
http://translate.google.com/#en%7Cms%7Copium

EDIT: Z++!

I would much rather name it:
TIScript
or...
TI++ z80.

They're just ideas; I'm pretty sure better names could be thought of.
carpo1 wrote:
seana11 wrote:
Would it be possible to do this for GlassOS when AHelper releases a beta?

Can I suggest a name? OK, so OPIA sounds like the beginning of opium. So I go on Google Translate and translate "opium" to Malay. What does it turn up? "Candu." I think that is a cool name for a language.

Link:
http://translate.google.com/#en%7Cms%7Copium

EDIT: Z++!

I would much rather name it:
TIScript
or...
TI++ z80.

They're just ideas; I'm pretty sure better names could be thought of.


Z is already taken, http://en.wikipedia.org/wiki/Z_%28programming_language%29, and Z++ would look like a spinoff of that. TIScript sounds like something TI cooked up for their next calc. TI++ z80 is just a mishmash of related terms coupled with a ++ that you've only seen in the title of C++.
1. Booleans are a whole byte because it takes extra instructions to extract/insert just one bit, which adds more than just another byte to the program just for using it; so it actually saves space that way Smile ... However, if you have a table of flags, and the same code to access the whole thing, nothing stops the programmer from using bitmasks (e.g. "value & 0x04" for the 3rd-order bit).
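The flag-table idea above can be sketched like this (a Java sketch with made-up flag names; on z80 the same masks would be applied with AND/OR/XOR against a single byte):

```java
class Flags {
    static final int SOUND_ON  = 1 << 0; // 0x01
    static final int INVERTED  = 1 << 1; // 0x02
    static final int FAST_MODE = 1 << 2; // 0x04, the "3rd-order" bit

    int bits; // eight booleans packed into one value

    void set(int mask)      { bits |= mask; }          // turn a flag on
    void clear(int mask)    { bits &= ~mask; }         // turn a flag off
    boolean isSet(int mask) { return (bits & mask) != 0; }
}
```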

2. No Garbage collector will be built-in. However, given a bit of information about the platform, one just needs to write a simple "new" function that manages an area of memory, and reacts to conditions to do a GC. The idea is that (1) any kind of memory management can be plugged in, and (2) perhaps the platform has something for this already; in which case you just kinda plug it in.
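A sketch of what such a pluggable allocator might look like, simulated in Java with an int array standing in for a fixed block of RAM (entirely illustrative, not OPIA's actual interface; a real routine would also coalesce adjacent free blocks, which this sketch omits):

```java
// First-fit allocator over a fixed-size block, the way a user-supplied "new"
// routine might manage an appvar on-calc.
class Arena {
    // Layout: each region starts with a size header (positive = free,
    // negative = in use), followed by the payload cells.
    private final int[] heap;

    Arena(int size) {
        heap = new int[size];
        heap[0] = size; // one big free block to start
    }

    /** Return a payload offset, or -1 if nothing fits (a GC could hook in here). */
    int alloc(int want) {
        int p = 0;
        while (p < heap.length) {
            int size = Math.abs(heap[p]);
            if (heap[p] > 0 && size >= want + 1) {              // free and big enough
                if (size > want + 1)                            // split off the remainder
                    heap[p + want + 1] = size - (want + 1);
                heap[p] = -(want + 1);                          // mark in use
                return p + 1;
            }
            p += size;                                          // skip to next region
        }
        return -1;
    }

    void free(int payload) { heap[payload - 1] = -heap[payload - 1]; }
}
```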

3. Yes, static is not an "access-level" modifier; it must have been misplaced.

4. Actually, my original plan was to do it like Java, and have the compiler just KNOW when functions should be virtual because of the overridden definitions. The reason I opted for the C# style (a mix of the Java & C++ styles) is that requiring one to explicitly say which functions are virtual (1) discourages needless uses of that feature and (2) means that you can look at a class and know EXACTLY how it is to be represented in memory. Otherwise it would mean that, because some other magical class in some other magical file decided to override a certain parent function, suddenly that function must be virtual -- without any changes having been made to the parent class! ... I was going for predictability, so that if someone makes something that is used in a zillion different compilation-units, there are not different versions of it coming from the same exact code. There will be ways to declare a class and say "Oh, but I already compiled it, and here's where to expect to find the class descriptor in memory ... but use/inherit from it as if it were there, because it WILL be when you load that program into that environment".

5. If a program is completely generic, then the same compilation could run anywhere. However, if it uses anything platform-specific, then it will either have to be recompiled, or coded in such a way that it does things generically and expects some sort of external cues as to what's where, etc. I want compiled programs to be able to be put onto the hardware and reused by other code, but also able to be picked back out of it. Maybe I (or someone) will make a tool for that later. The whole compilation process is going to be very modularized, and I'd like even the different "parts" of the resulting program to be accessible if need be.

6. I have not thought about how assertions would be done. I imagine that you can assert something by putting in an "if" ... but if you are talking about invariants of a function (pre/post conditions coded in) ... well, that would still be a shorthand for about the same thing. ... Sorry, I don't think so. I don't want to add TOO much to the language. It's got great features from languages designed to run on MUCH more complex systems, and I am just bringing the necessities (plus some things that would be difficult if not impossible to do by hand; and maybe plus a couple nice things), but I'd like to avoid anything that adds more complexity to the language than there already is.

7. Yes, EVERYTHING is pass-by-value. Reference-types get passed by value, which just happens to mean that what they are referencing does not get copied directly. However, there are "ref" arguments, which basically get converted into pointers in order to "stand in" for the variables being referenced. The "new" and "val" arguments are not really kinds of arguments or forms of passing, but just little convenience shorthands that insert code at the start of the function to perform a static or dynamic copy (this way, reference types don't have to have all kinds of copy functions written, without going "thing = new Blarg(thing);" all over the place).
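Java happens to have exactly the semantics described in point 7, so it makes a convenient illustration: the reference itself is copied, so reassigning the parameter changes nothing for the caller, while mutating through it does.

```java
class PassDemo {
    static void reassign(int[] arr) { arr = new int[] {99}; } // only the local copy changes
    static void mutate(int[] arr)   { arr[0] = 99; }          // affects the shared object

    public static void main(String[] args) {
        int[] data = {1};
        reassign(data);
        System.out.println(data[0]); // still 1
        mutate(data);
        System.out.println(data[0]); // now 99
    }
}
```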

No Z++, because the letters and pluses and sharps etc. are already all over the place (and for all I know, somebody will do the same thing). No "TI" anything, because this stuff will run on any z80 machine at all (though heck yes, TIs are the focus). My approach is very different from something like Axe, which is so strongly tied to its environment that it makes programming for it very convenient. My goal is to exploit the hardware (CPU) as cleanly as possible, with some mechanisms that are just NEEDED to do things like OOP in the first place, and maybe a couple other things so that the language can be more usable ... but the beauty is that libraries and includes can be written to make it work for anything, or even interface with already-existing OSes, APPS, etc. without limit.

Honestly, I like "OPIA"; but I am likely to change it just because everybody keeps asking what I'm "going to call it"; and fine, I can market my language a bit (most languages change names anyway). So far, I am liking Candu better than anything; but to be clear, unless/until I make an official decision, it is still OPIA (I've had some people just start calling it by suggested names, about which I have said no more than that I like the idea). ... Anyway, let's keep the names coming! ... So far, I've gotta say that "Candu" has favorable chances.
DJ_O wrote:
Wow, I haven't seen you in a long while.

I'm glad you are still around and working on a new language project.

I cannot criticize much on the project content/feasibility because I do not know much about that stuff, but from the experience I have in the TI community (I've been around for 9 years minus 4 days by now), I saw many language projects fail (or released before they work properly). Often, the language was too cryptic/hard to understand, the editor was a major PITA to use or the author was too ambitious for his programming/planning skills.

In Quigibo's case, I recommended, when he talked to me about Axe, that he wait a little bit before releasing any alpha version or announcing the project. I felt it was too early given what was done, and today in the community people won't take projects as seriously if they are announced at very early stages. Quigibo waited until a decent number of commands were done before announcing Axe Parser, and now look where the project is.

Another thing: I do not want to discourage you, but since Axe is rather easy to learn for BASIC programmers (the syntax is almost identical for certain programs), is programmed under the TI-BASIC editor, is compiled on-calc, and is already very functional without many bugs, it will be very hard to compete against Axe with a new language project. If you check http://www.omnimaga.org, the Axe Parser sub-forum alone had 5634 posts in 5 months, and if we include projects on that site, it's 6715 if I forgot nothing. This is 1343 a month. Even on Omnimaga, which is generally more open-minded towards every kind of project aimed at game programming (or games themselves), you would need to make a language that is as easy, as reliable, as fast, as small (compiled), and that allows as much freedom as Axe Parser; otherwise your language would need to use a syntax very similar to popular computer languages to get a large audience.

See BBC Basic, for example: a pretty solid language APP Benryves made last year, but the hard-to-use on-calc editor killed its popularity.

Regardless, I hope this project goes towards the right direction and I wish you good luck. Smile


I think, DJ, you could be more constructive by NOT telling people their project will never match up to the projects on your forum.

That's just what it sounded like.
seana11 wrote:
But it can all be avoided by disallowing multiple-inheritance, thus eliminating unnecessary complexity.


Except multiple-inheritance is awesome and useful. Complexity is not a reason to eliminate a feature.

Static allocation kind of sucks (it kills recursion), but it's somewhat understandable. I think it would be better to actually support memory allocation, though: a small malloc library that allocates/frees from a fixed-size block of RAM (perhaps an appvar or program created/reused when a program is launched, that all OPIA programs can share?). And primitives should sit on the stack.

Being able to bypass the ctor is also ugly and should be removed. That shouldn't be allowed, and defeats the entire point of a constructor.

Quote:
- X = static Foo(...); // X references to a pre-constructed instance of Foo.


That is ugly and should be removed. Particularly the use of "static" as a keyword in this case. But the whole notion of having a "pre-constructed instance" is completely wrong and should be axed.

The "ref" keyword should go. It should be replaced with something like C#'s "out" keyword - whether or not it is a reference shouldn't matter to the developer.

Similarly, the "new" and "val" keywords *really* need to go. Automatic deep copy is pretty much impossible, shallow copy is full of hidden pitfalls and dangers, it makes it too easy to add way too much overhead, and it serves no real purpose.

Quote:
Functions can be declared within other functions and may access the local variables from the surrounding context ("closures"). Since these are always static addresses, there is no fancy mechanism needed (though that DOES mean that calling a closure which has escaped its parent function (e.g. by returning a reference) results in "undefined" behavior).


This makes closures pretty much useless - just get rid of them.

Quote:
- x = [bool,char=>byte]foo; // Using "byte foo(bool, char)" and not some other "foo".


Casts should require parens: "x = ([bool,char=>byte]) foo;"

Structs: They should either be purely data containers (no functions whatsoever), or they should be removed. This sort of partial-class thing is just weird and useless. I think you should remove structs entirely; there isn't a point to them.

Interfaces: If classes support multiple inheritance, so should interfaces.

"Enumerators" - too close to enums, use instead "Iterators"

Enums should be an unsigned byte or short - limiting to 127 is lame. Likewise, support allowing the values to be specified, and *ALWAYS* count sequentially (never do that 0, 3, 6... thing you propose)
When I say "multiple inheritance", I think of a class having multiple parents (rather than implementing interfaces), and that requires more overhead just to support. But that's what the interfaces are for (Java's and C#'s solution, and OPIA works the same way). Interfaces can "extend" each other, but they don't "implement" each other, because they themselves represent contracts that classes are to keep; and a class can certainly implement more than one of them.

Static allocation has its nastiness (especially recursion); but it also greatly simplifies how and where things are stored in the general case. OPIA has some big features, but it can be used without all the OOP, just writing something to do a simple task ... I didn't want to make needless use of the stack in every case. Instead, recursion-detection and liveness-analysis will have values saved on the stack when not doing so would cause collisions with recursion. The liveness analysis and value-tracing keep this from happening to more than just the vars that need it, and some vars can just stay where they are if they need to be put back ... I've gone over this many times, and there are strong cases for both. But I recognize that it requires extra checks and work that would otherwise come for free; and I claim it's a fair trade at the least.
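To see concretely why static allocation of locals kills recursion, here's a Java simulation (hypothetical names): a "local" that really lives in one static slot gets clobbered by the recursive call, so the unwinding multiplies by the wrong value.

```java
class StaticLocals {
    static int n; // a "local" stored statically: ONE copy shared by every call

    static int facStatic(int input) {
        n = input;
        if (n <= 1) return 1;
        int rec = facStatic(n - 1); // this call overwrites n for every frame!
        return n * rec;             // n is now 1, not this frame's value
    }

    // The stack-based version keeps one n per call, so recursion works.
    static int facStack(int n) { return n <= 1 ? 1 : n * facStack(n - 1); }

    public static void main(String[] args) {
        System.out.println(facStack(5));  // 120
        System.out.println(facStatic(5)); // 1 -- every frame sees the clobbered n
    }
}
```

This is exactly the "value collision" the recursion-detection pass has to guard against by spilling such variables to the stack.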

The idea of allocation functions being written is exactly the idea Smile ... but those would be coded, not built-in, and right now I am focused on what to build in (though having that in mind definitely influences other things). That is another reason references can be assigned to static things: perhaps someone uses allocation functions separately from the "new" mechanism, and just wants to assign the resulting value (which happens to have been gotten dynamically within the function).

I didn't want variables to necessarily have to construct as they are declared; however, I could easily have the compiler complain when variables are used without having been assigned/initialized first (and that would certainly require the construction to happen first).

Assigning to a static allocation ... I was waiting for someone to say it was nasty Smile ... it kinda is. However, this kind of thing happens all the time in assembly. For example, with {char[] str = "blah";}, the "blah" would be statically stored in the program as a preconstructed instance of a string. Compare that to {= new char[]("blah")} or {= char[]("blah")}. The syntax IS a bit nasty, but that's also to provide that if you do it, it's on purpose. ASM programs hard-code "records" of information all the time, in which each entry COULD be represented as a struct (has a uniform format). That's what that's about.

C# provides both "ref" and "out" parameters, such that "ref" arguments are required to be assigned/init'd before passing them in, and "out" params are required to have a value before the function returns. I was going to have the same thing, but use "in" and "out"; but I figured it would be simpler to just let "ref" be for either and just not impose any rule for assignment (and leave "in" and "out" as valid identifiers, since they are nice names); but I can switch back.

As for "new" and "val" ... GONE. I was considering them in the first place because the compiler MIGHT have been able to spare an instruction or two if it could make the copy during the pass; but I found it better to do it in the function body, and at that point it was just a shorthand. ... Yeah, where deep copies are needed, those don't make it any simpler, and can paint a false picture.

Closures are there basically because function-pointers exist. For something like "void affectStuff([byte=>] how) { ... }", it would be nice if a calling function could define the "how" in terms of local variables, e.g. sorting a localized list. But it's not meant to escape. The static allocation of vars meant that allowing them was trivial, and per the example I just gave, they can be useful. As for the "undefined behavior", such a closure would actually work just fine -- the "undefined" part is that perhaps it suddenly coexists with another call to the function it came from, and suddenly there are value collisions ... that's nearly impossible to track, unless I provide a mechanism to lift the "free variables" out of the parent context, which I think adds more complexity than is needed. The two options I consider are (1) disallow closures to be returned from a parent context if they reference local vars (passing is okay, and useful), or (2) allow them to escape and call it "bad practice" ... hmm ... 1 is sounding better; but perhaps if the programmer is careful, they can get away with it? That might cause more problems if they are wrong, though ... it's probably better to have the restriction to keep things "safe".

I agree, casts should be in parens (all others are). On a side note, I came across an interesting conundrum: is it terrible to allow function-pointers of different types to have the same name? That would be outright madness for other types; but it would be allowed for the same reason that functions can have the same name so long as they take different arguments. ... In THAT respect, the function-pointer "cast" is really meant to be more of an "Oh, I mean THIS function" statement to resolve ambiguity, so I might leave off the parens and distinguish it from a cast ... sorry, it's not really a cast.

Structs are indeed meant to just be straight-up data containers; however, allowing member functions really doesn't hurt, and can be convenient. The only difference is that the member function takes a pointer for the "this" (which is more efficient than copying a whole struct to a function which ought to act on it directly rather than on a copy; otherwise it wouldn't be declared as a member function). However, this is why inheritance and virtual functions are NOT allowed within structs: they contain NO reference to any kind of descriptor, nor anything but their actual contents. I said they might contain references to descriptors for the interfaces they implement, but that's wrong; a struct instance is of a very specific type (because of the lack of inheritance), so if one needs to be boxed into an interface instance, there is no question about which descriptor to use (and it will never be part of a struct instance, but stored statically somewhere).

Iterators -- I really thought about that, too. It's a technical difference, because these don't have to "iterate over something" per se; they can generate or compute something in series, and "enumerate" fit what they did better ... But I agree, it's a bit too close to "enumeration". I was going to call them "processes" (because whatever one is doing, it's a "process" that takes steps ... but I didn't want to confuse that with the idea of a parallel thread of execution). ... One thing further: it MIGHT be nice to have a for-each loop that takes one of those (or something that HAS one of those) and runs through until it's done; but that requires some more thought and discussion and particulars about how it would interface, etc. ... There'd be no way for it to apply to arrays, unless the compiler could infer the size; and idk about always assuming that char[]s are strings with a null char at the end ... but anyway, how is "process"?

I wanted enums to be strongly purposed for use with switches, but not limited to that. The reasoning for forcing the values like that is that (1) it makes them incredibly efficient with switches (a big motivation for even having them in my language), and (2) I wanted to present enums as a pure "choice" mechanism, such that the value is only thought of as one choice or another, rather than ever being the result of a computation. I wanted to keep them abstract, and mainly as a "choice" mechanism. The user would never see the underlying values. I feel that forcing this would generate more efficient code in most cases; and otherwise you can always use a non-enum type. ... I did find a way to allow up to 255 values, though (switches perform best when there are only 43 values, require another instruction for up to 86, and then a few more to handle more than that).
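To illustrate why dense, sequential underlying values matter (a Java analogy; the JVM's tableswitch plays the role a z80 jump table would):

```java
// Java enums are also forced to sequential underlying values (ordinals 0..n-1),
// which is what lets a switch over them compile to a single indexed table jump
// instead of a chain of comparisons.
enum Direction { NORTH, EAST, SOUTH, WEST } // ordinals 0, 1, 2, 3 in order

class Dispatch {
    static int dx(Direction d) {
        switch (d) {            // dense 0..3 range: one table lookup, no compare chain
            case EAST:  return 1;
            case WEST:  return -1;
            default:    return 0;
        }
    }
}
```

If the values could be arbitrary (say 0, 3, 6...), the table would be sparse or the compiler would fall back to comparisons, which is exactly the cost the forced-sequential rule avoids.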

DOCUMENTATION UPDATED: http://tinyurl.com/z80opia
Quote:
I think, DJ, you could be more constructive by NOT telling people their project will never match up to the projects on your forum.

that's just what it sounded like.


Didn't sound like that to me. More of what I heard was a decent explanation about the pitfall of most language projects.
Yes, DJ had a point, which was about what it would take to get a real following (and feedback is helpful). And I agree with the view that it would have to be either like AXE or like popular computer languages -- the latter is my approach, and why I think this will add a richness of its own to the community, much like AXE did, but not in direct competition. ... The funny thing is that the idea which sparked this long ago was to have something that would compile BASIC programs (or something very close to them) into assembly; at that point, we decided to just follow the rules of popular computer languages (mainly C++ at the time).

By the way Ashbad, Binder News wants to be involved in this directly, and he says you two have some experience with this kind of thing. If you (or anyone else) wants a big piece of this, email me at shkaboinka@gmail.com ... though, just saying, I'm probably going to do most/all the coding myself Smile
I might be able to help a little bit later on. Right now I'm booked though Razz

But I'll watch this thread regularly and provide any input for now. Wink
shkaboinka wrote:
I didn't want variables to necessarily have to be constructed as they are declared; however, I could easily have the compiler complain when a variable is used without having been assigned/initialized first (and that would certainly require the construction to happen first).


If it has a "new" in front of it or parens after it, or if '=' is involved in any way, it is a construction, not a declaration.

Quote:
Assigning to a static allocation ... I was waiting for someone to say it was nasty Smile ... it kinda is. However, this kind of thing happens all the time in assembly. For example, with {char[] str = "blah";}, the "blah" would be statically stored in the program as a preconstructed instance of a string. Compare that to {= new char[]("blah")} or {= char[]("blah")}. The syntax IS a bit nasty, but that's also so that if you do it, it's clearly on purpose. ASM programs hard-code "records" of information all the time, in which each entry COULD be represented as a struct (each has a uniform format). That's what that's about.


"static foo = new foo();" <- normal, makes sense

vs.

"foo = static foo()" <- wtf is this?

Also, you are talking about storing constant data statically, which is fine and expected (and requires no keyword). I'm referring to the messed up syntax above.

Quote:
Closures are there basically because function-pointers exist. For something like "void affectStuff([byte=>] how) { ... }", it would be nice if a calling function could define the "how" in terms of local variables, e.g. sorting a localized list. But it's not meant to escape. The static allocation of vars meant that allowing them was trivial, and per the example I just gave, they can be useful. As for the "undefined behavior", such a closure would actually work just fine -- the "undefined" part is that it might suddenly coexist with another call to the function it came from, and suddenly there are value collisions ... that's nearly impossible to track, unless I provide a mechanism to lift the "free variables" out of the parent context, which I think adds more complexity than is needed. The two options I'm considering are (1) disallow closures from being returned from a parent context if they reference local vars (passing is okay, and useful), or (2) allow them to escape and call it "bad practice" ... hmm ... 1 is sounding better, but perhaps if the programmer is careful, they can get away with it? That might cause more problems if they are wrong, though ... it's probably better to have the restriction, to keep things "safe".


The problem is you are introducing a half-complete construct. If you can't do it right due to technical limitations, *DON'T* add it to the language yet.

Get rid of closures until they are *actually* closures. C/C++ has been fine without them.

Quote:
I agree, casts should be in parens (all others are). On a side note, I came across an interesting conundrum: is it terrible to allow function-pointers of different types to have the same name? That would be outright madness for other types; but it would be allowed for the same reason that functions can have the same name so long as they take different arguments. ... In THAT respect, the function-pointer "cast" is really meant more as an "Oh, I mean THIS function" statement to resolve ambiguity, so I might leave off the parens and distinguish it from a cast ... sorry, it's not really a cast.


Definitely don't do that.

Quote:
Structs are indeed meant to be straight-up data containers; however, allowing member functions doesn't hurt that, and can be convenient. The only difference is that a member function takes a pointer for the "this" (which is more efficient than copying a whole struct into a function that ought to act on it directly rather than on a copy; otherwise it wouldn't be declared as a member function). This is also why inheritance and virtual functions are NOT allowed within structs: they contain NO reference to any kind of descriptor, nothing but their actual contents. I said earlier that they might contain references to descriptors for interfaces they implement, but that's wrong; a struct instance is of one very specific type (because of the lack of inheritance), so if one needs to be boxed into an interface instance, there is no question about which descriptor to use (the descriptor will never be part of a struct instance, but stored statically somewhere).


This should be a compiler optimization. Empty class with no virtuals? It becomes an implicit struct. There should be no reason to require a developer to tell you this.

Quote:
Iterators -- I really thought about that, too. It's a technical difference, because these don't have to "iterate over something" per se; they can generate or compute something in series, and "enumerate" fit what they did better ... but I agree, it's a bit too close to "enumeration". I was going to call them "processes" (because whatever one is doing, it's a "process" that takes steps ... but I didn't want to confuse that with the idea of a parallel thread of execution). ... One thing further: it MIGHT be nice to have a for-each loop that takes one of these (or something that HAS one of these) and goes through until it's done; but that requires some more thought and discussion about the particulars of how it would interface, etc. ... There'd be no way for it to apply to arrays, unless the compiler could infer the size; and I don't know about always assuming that a char[] is a string with a null char at the end ... but anyway, how is "process"?


Just call them iterators - everyone else does.
Foo f; // Not constructed
...(do stuff, but f is never touched)...
f = Foo(...); // constructed
Foo g = f; // no problem!

If you replace "Foo" and "Foo(...)" with "byte" and "5", this is fine and happens all the time; though it's always good practice to initialize variables right away, most languages don't require it. I am just extending the idea to non-primitives; though allowing construction at any point DOES mean that things can be reconstructed ... meh.

static f = ... // f is static and shared across all instance calls
f = static ... // f is NOT static, but let's not actually construct the instance during runtime.

...So there is a difference. HOWEVER: (1) the compiler is likely to just use static instances anyway when it's more efficient; (2) the $$ operator already does this (I wanted a way to refer to strictly static instances, but didn't realize I was being redundant!). Generally, the user should just trust the compiler; but using $ and $$ where it's imperative to guarantee certain things just generates an error if the guarantee cannot be met, so they act more as assertions (though sometimes they can influence the compiler's decision).

It is true that they are not complete "closures" in that sense; so the problem is saying I provide closures and then not providing what one thinks of as a closure. However, "inner functions" or "local function definitions" are going to be allowed, and are useful. Don't think of them in terms of how closures are useful elsewhere (like in JavaScript), because we are not talking about closures (that's my bad for calling them that when they are not true closures in every sense). The rules are that (1) they cannot be returned from a declaring context in which they access local vars (they're not meant to be used as "closures" in that sense), and (2) they cannot be returned if passed into that context (hey, the calling context still has it; and this prevents a loophole). ... Anyway, it's trivial to allow "local functions", so I will. They don't have to be used, and this will guarantee that they cannot be abused.

I won't allow variables to have the same names, even if this could be done right just for function-pointers (it's just too nasty, and can cause confusion). I have also removed the "specify which function" syntax (which, as I was saying, technically is not a "cast"), since it should always be clear from context which is meant because of the strict type system (though if some obscure situation somehow needed it, I will allow a traditional-style cast).

Virtuals aside, the difference is that structs are always copied when assigned or passed, and classes are not; but passing references is more efficient anyway, and I suppose a copy function is not hard to write. ... I will probably drop structs, and just have it that final instances are stored directly (rather than as references), and that final classes without virtuals which implement an interface will not need to store descriptor references internally. ... I'd like to hear a BIT more discussion on this before I make those changes (CALLS FOR DISCUSSION).

...Fine, iterators. There is ambiguity with "process" anyway, and it's better to err toward calling them iterators, because that's basically what they will be used for ... though one could write one that just does a step-by-step thing without spitting out any values; I'd rather have THAT usage be confusing than have their use as iterators be confusing. (CALLS FOR DISCUSSION ABOUT A FOR-EACH MECHANISM.)

...One other thing: honestly, I hate having both $ AND $$. Perhaps I should have JUST $, and have it be a strong assertion like $$ currently is. The compiler can be smart about what $ currently requires anyway. I can have $ be put on expressions to say they must resolve to a known value, that functions/calls must do the same, and that flow-control (and its contents) must be completely interpreted. To assert that just the construct itself is removed, put it on the expression (e.g. if($(a < b)) { ... }) ... How does that sound? (CALLS FOR DISCUSSION)

FILE UPDATED! (again)
  