This is very sad news indeed, shkaboinka! Personally, I think that option (1) would be the best alternative. I don't think that anyone else would have the same drive as you to complete the project, and I don't think that anyone else would be as keen on going in the direction you were going with the language. I think someone else might end up making a final language that would be less object-oriented, for example. I do hope that things look up for you soon though, time-wise!
This is definitely sad news, though I wish the best for you in the meantime! I had a fun time following the progress of this project, and I myself learned a bit through watching your process. Smile
After some more thought, I've decided that I can keep the project open, but I can only work on it very intermittently. It just cannot be a priority or anything I can expect to "get done" in any specific time frame.

With life picking up the pace rapidly and personal projects like this needing to take a back seat, it may be weeks or months at a time where no progress can be made (especially since the nature of these things requires significant amounts of time to (re)gain momentum).

As I eventually hit different "checkpoints", I will still post them here.

I'll finish this someday.
Some changes I want to make (which have been floating around on a sticky-note for a few weeks):

Remove Cofuncs (cofunctions)

I feel that explicitly distinguishing cofunctions from functions adds unnecessary complexity to the language, because the compiler can determine whether a function needs to store state or not: Any function which "yields" values will store state between calls into a shared object (these funcs start with "goto X", where X is updated with each yield). Functions declared within structs will store any needed state (including the "goto X") as part of each struct instance (this was going to be the effect of declaring a cofunc within a struct). No functionality (nor background implementation) would be lost with this change. This would allow lambdas to carry state if coded with "yields" (and I'll consider the option of declaring "new" lambdas or "new" other stateful-functions). A cofunc with a "members" body (in addition to the function-body) would be replaced by a struct containing the (separately named) function.
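
For example (a rough sketch; the exact yield/declaration forms here are illustrative only, and "gen" / "Counter" are made-up names):

Code:
// A free function that yields keeps its state (the resume point) in one shared object:
func gen():int {
    yield 1;
    yield 2; // every caller resumes this same shared state
    yield 3;
}

// The same kind of function declared within a struct keeps that state per instance:
struct Counter {
    int i;
    func next():int { yield i; i = i + 1; } // each Counter instance resumes independently
}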

Down-casting

If some struct B contains an instance of some struct A, then that inner A can be cast to type B (the address of the B is computed from the address of the A and the offset at which the A is stored within the B). I'll have to refine this a bit for cases where one struct contains TWO instances of another.
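
Roughly what I mean (sketch only; the cast syntax shown is a placeholder):

Code:
struct A { int x; }
struct B { int id; A a; int y; } // B stores an A at a known, fixed offset

B b;
B back = (B)b.a; // placeholder syntax: the address of "back" is the address of
                 // "b.a" minus the offset of member "a" within B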

If struct A has a function defined within it (thus causing A to store a pointer to that function, so that it can be replaced with a reference to a different function), and struct B contains an instance of A, and B assigns a different function to the inner A when it constructs, then that function automatically down-casts the A (the "this" of the function) to a B, thus providing access to all of B's members within the A function. ... I can either disallow such member functions from being assigned to method pointers (for type safety), or not worry about type-safety for the same reason that there is no built-in error checking (out of bounds, arithmetic overflows, etc.).

Change switch-variable syntax

Code:
//Originally:
switch {A,B,C} mySwVar = A; // type, name, init

// This pattern is more consistent with how structs/funcs/switches are declared:
switch mySwVar {A,B,C} = A;
Everything makes sense to me there except for the intricacies of down-casting that you discuss. It seems to me that if B assigns a different function pointer to one of A's functions, then the resulting struct is neither an A nor a B, no?
Typically, methods are declared outside of their struct ("non-virtual"), acting as extension methods apart from the struct type.

Internal ("virtual") methods cause the struct to contain a pointer to the method, thus allowing a different methods to be assigned to a particular instances.

The key difference here between Antelope and traditional OOP is that "virtual" methods are stored directly within each instance (as with other members), rather than in a shared method table that each instance has a reference to, as in traditional OOP (where the address of a table corresponds exactly with a class-type).

This design promotes the use of interfaces for dynamic objects (which carries essentially the same overhead but is a more flexible/powerful model than traditional inheritance), while providing a cleaner syntax for declaring member function-pointers as methods with default bodies.

If I want to worry about type-safety (which would still have holes in it due to the lack of any built-in runtime error checking), then I can allow these methods to be assigned from any method-pointer of the correct type, but not allow the reverse operation (else a method for an A as a B could be called on any A).
I believe you will finish this someday. It is a cool project after all!

BTW: http://www.dev.gd/20130122-the-joys-of-having-a-forever-project.html Wink
Some changes I've made:

Removed cofuncs. Instead, functions can store state (i.e. if they "yield" values) either globally as needed (shared across all calls), or within the struct they are declared in (state is per instance).

Removed separate syntax for constructors/destructors. Method syntax is now uniform for everything:

func Foo.blah(...) { ... } // regular Foo method
func Foo.new(...) { ... } // Foo constructor
func Foo.+(...) { ... } // Foo + operator

Including the datatype for "new" instances is no longer optional

[]int a = new [8]int; // CORRECT
[]int a = new [8]; // INCORRECT!
a := new [8]int; // (This is ok though)

New constructor rules: (1) Initializer lists can now be used without a constructor call, (2) No "all members" constructor is provided automatically if no constructor is given (use initializer lists instead), (3) No need to declare constructors within the struct (but if you do, the struct contains a pointer to the constructor):

struct Point { int x, y, z; }
func Point.new() { x = y = z = 0; }
Point p1(); // 0, 0, 0
Point p2() { y = 2 }; // 0, 2, 0
Point p3 { x = 1, y = 2, z = 3 };
p4 := Point() { ... } // Just to show both syntaxes

Changed namespace syntax from a top-of-the-file-affecting-everything-in-the-file statement to a block statement. This allows multiple namespaces within the same file, lets namespaces be nested, and makes "using" statements apply to the namespace they are within (or to the whole file if declared outside of a namespace):

using Foo.Bar; // applies to whole file
namespace Blah { using A.B; ... } // using A.B just for Blah
namespace Next { namespace Nested { ... } } // Next.Nested
namespace Next.Inner { ... } // Next.Inner (do not HAVE to "nest")
EDIT: I am totally redoing this post (it was too complex)

Should I get rid of (explicit) pointers?

The point is to keep the language simple and clean (compare Java/C# to C++) by having all user-defined types (structs) stored by reference (as "automatic pointers"). However, the "by value" operator (&) can be used to store a "reference type" by value (like the reverse of a pointer). The value would still be treated like a reference; it would just be stored by value.

Code:
struct Point { int x, y; }
Point p1; // stored by reference
&Point p2; // stored by value
p1 = p2; // p1 points to p2
p2 = p1; // NOT ALLOWED: p2 is fixed in memory (not a reference)


The syntax for passing by reference would change (e.g. "ref type arg", which I think is clearer anyway).

The == operator would test whether reference types point to the same thing (reference equality).
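
To sketch the difference (keywords tentative; that reassigning through "ref" rebinds the caller's variable is my working assumption):

Code:
struct Point { int x, y; }

func reset(ref Point p) { p = Point(); } // "ref": reassigning p affects the caller's variable

Point p1;  // stored (and passed) by reference, like objects in Java/C#
&Point p2; // stored by value, but still used like a reference
(p1 == p2) // reference equality: true only if p1 currently refers to p2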

EDIT: The grammar has been updated to reflect this. However, the syntax is now "val Point p" rather than "&Point p", and array indexing is now on the right side of Types (int[] rather than []int).
Problem: Overloading the == operator may cause inconsistency with interfaces:

Code:
struct S { int x; }
func S.==(S other) => (x == other.x);

iface I { ... } // Assume S implements I
func eq(I i, S s) => (i == s);

S s1 { x = 5 };
S s2 { x = 5 };

(s1 == s2) // true, because s1.x == s2.x
eq(s1, s2) // false, because s1 and s2 are not the same S!

Possible solutions:

1) Don't allow the == or != operators to be overloaded. Instead, you can define an "equals" method. The meaning of == would always be clear (reference equality), but "equals" methods would have to be used for custom equality. Down side: being able to overload most operators, but still having to use an "equals" method.

2) Have an "is" operator for reference equality which cannot be overloaded. Interfaces would get "is" for free, but would have to require an == method to allow for custom "equality". By default, == and "is" mean the same thing, but you still choose between "same reference" or "equality" connotations. Down side: change in convention from using == to using "is" (but perhaps the meaning is clearer?).
Does anyone see any flaws in what my last two posts suggest?
<EDIT: I found a simpler syntax, so I've just revised this whole post>

A trait is a unit of code that can be "copied" into a class or another trait (this is not inheritance, as it happens statically at compile-time). If multiple traits with overlapping method-names are used together, then you must label which methods to use from which trait. You can assign alternate methods to use for a trait (=) or use a method of a trait under a different name (->).

Code:
trait T1 {
  func A() { "A1"; }
  func B() { "B1"; }
  func C() { "C1"; }
}

trait T2 {
  func A() { "A2"; }
  func B() { "B2"; }
}

class Imp : T1(A,B), T2(A->C,B=C) {
  // Imp gets these methods from T1 and T2:
  // func A() { "A1"; } // hence T1(A)
  // func B() { "B1"; } // hence T1(B)
  // func C() { "A2"; } // hence T2(A->C)
}

// (new T2(Imp)).A == T1.A // hence T1(A)
// (new T2(Imp)).B == C // hence T2(B=C)

Traits can provide empty methods to indicate that the method must be provided from somewhere else (the class (or trait) using the trait, or from one of the other traits being used by the class (or trait)). If a class (or trait) already has a method that one of its traits provides (or requires), then that method is always used instead. This also means that if one trait uses another, it can redefine a method as an empty method to "throw out" the one provided by the first trait and make it a requirement of the new trait.

Traits also replace interfaces: a trait with all empty methods (e.g. func(int):bool; //<--semicolon) works the same way as an interface with the same methods would. Additionally, a class can "have" a trait without having to say so, so long as it has the required methods.
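
A quick sketch of both ideas (names are illustrative only):

Code:
trait Matcher {
  func test(int x):bool;                 // empty (note the semicolon): must come from elsewhere
  func either(int a, int b):bool { ... } // provided; builds on the required test()
}

trait Pred { func test(int x):bool; }    // all-empty: behaves just like an interface

class Evens : Matcher {
  func test(int x):bool { ... } // fulfills the requirement; Evens also counts
}                               // as a Pred without ever saying so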
Some thoughts on passing function arguments

Function arguments may have default values, which must come last. For each default value, there will be a hard-coded assignment in the function, followed by an alternate entry point for when the default value is not used. This allows several different "versions" of a function to exist, each corresponding to a different entry point and being a valid target for function pointers with the same (reduced/expanded) argument list:

Code:
func F(int a, int b=4, int c=5) { ... }
F(1,2,3); // 1,2,3
F(1,2);   // 1,2,5
F(1);     // 1,4,5
func(int,int,int) f3 = F; f3(1,2,3);
func(int,int)     f2 = F; f2(1,2);
func(int)         f1 = F; f1(1);

Function-pointers may point to functions with fewer arguments and more return values (so long as the "common" values are of the same type), such that the "extra" values are just thrown out/ignored:

Code:
func G(int x):int { ... }
func(int,int) ptr = G; // 2nd arg of ptr and return value of G will be ignored

Thoughts on Low-level implementation:

Storing/passing of arguments will rely firstly on registers, and then (after there is "no more room") on other means (the stack or static storage). The pattern of register assignment for a function with N arguments will be the same for a function with N+1 arguments, plus another register (that is, choosing registers for later arguments depends on those chosen for previous arguments, but not vice versa). This will allow for the previously described behaviors to work properly.

When passing values to a function, the last arguments are assigned first, and the first assigned last. This not only allows for the "hard-coded default arguments" behavior, but also allows the A and HL registers to be the first register options (they must be loaded last, since static values cannot be directly loaded into other registers). This also conforms to using just A or HL for one-argument functions (highly desirable).
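
For example, for the F declared earlier (three byte arguments landing in A, B, C), the caller-side code and the default-value entry points might look roughly like this (hand-waved Z80 with made-up labels, not actual compiler output):

Code:
; entry points inside F:
F_1:  ld b,4     ; hard-coded default for b (used when only a is passed)
F_2:  ld c,5     ; hard-coded default for c (used when a and b are passed)
F_3:             ; full entry: a, b, c all supplied by the caller
      ...

; callers assign the last arguments first, so A stays free until the end:
      ld c,3
      ld b,2
      ld a,1
      call F_3   ; F(1,2,3)

      ld b,2
      ld a,1
      call F_2   ; F(1,2) -- c defaults to 5

      ld a,1
      call F_1   ; F(1)   -- b and c default to 4 and 5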

Note that larger-than-word-size values will either be passed by address (fits in a word), or split across multiple registers / storage locations.

I've determined that the cleanest register-selection algorithm would be to set up a "spectrum" of registers, and have byte-sized arguments start at one end, and word-sized ones start at the other:

A B C D E H L

Examples:

byte byte byte : A B C
word word word : HL DE BC
word byte word : HL A DE
byte byte word : A B HL

Notice that this pattern stays the same as you add/remove arguments, and it reduces the likelihood of register "size-collisions" -- That is, B and BC would not both be used unless/until there are no more registers to use anyway.

Note: Not all registers can (or should) be used before resorting to other means, because then there is nothing left for the function to work with, and some of those values will likely need to be moved elsewhere anyway (static address or onto the stack, etc.)

Values will sometimes need to be saved/retrieved before/after function-calls to prevent them from being overwritten (for example, recursion would not work properly otherwise). However, "live-ness" analysis and inter-function dependencies will be used to determine when this is ACTUALLY necessary, and for which values.

"Other Means?"

Once all registers are used up, I can either pass variables on the stack (values are already saved and recursion-friendly -- no need to push them on later!), or I can load them into static addresses predetermined per function per argument (values can be loaded directly into instructions as immediate values -- saves space, very efficient), or load them into static addresses per function-CALLER (some arguments can be hard-coded in and take ZERO runtime processing to "pass", and sometimes even be reused!). There are definite down-sides and gains to each. Not using the stack means that values would sometimes have to be pushed on later (analysis can determine when this is not needed). Both the stack option and the caller-storage option would allow the index (IX) register to be used to access arguments directly by index.

In either case, I need to pick one scheme to use so I can have consistent low-level compatibility among functions.

Methods:

Methods need a "this" argument (the "receiver"). It makes sense to reserve the IX register for this, since it is likely to be efficient to be able to access members of an object (class instance) by offset (is it really more efficient than manually referencing and loading each as needed though? needs research/experiment) -- especially with the use of interfaces (now called "traits"), where the nature of the underlying object is not always known ahead of time (dynamic).

Reserving the IX register would also allow non-method functions to be used as methods for interfaces (traits) (e.g. a class can fulfill an interface requirement via a STATIC class function), and perhaps make the distinction between static/nonstatic methods unnecessary, because: (1) The compiler will detect which "methods" do not access instance members, and therefore not pass a "this" when calling them, and (2) If such an "apparently static" "method" is used with an interface, passing the "this" does no harm, since it will just be ignored and does not / cannot affect other arguments passed.

(Side note: This would allow classes themselves (apart from separate instances) to be treated as objects and qualify as valid implementations of interfaces. For this though, I may opt to replace "static" values with the notion of a "class object", or an embedded singleton object which takes on the name of the class and acts as a "static" container, but is also an object in its own right, meaning that it can be referenced directly, passed around, inherit from other classes, and allow for trait composition at the "static" level).

On the other hand, reserving the IX register for "this" also means (1) that it cannot be used for arguments which might make better use of it, such as (2) storing arguments per caller-site, which would otherwise be able to be referenced by offset (IX could be reserved for this INSTEAD?).

I cannot opt to use IY, because the TIOS uses that to store flags, and does NOT back it up elsewhere (thank you for ruining that option for a whole language! arg!!!)
The language currently has the option of constructing objects dynamically ("new") or statically (no "new"):

Code:
a := new Foo(someArg); // a "new" Foo is dynamically allocated
b := Foo(someArg); // this Foo is allocated statically and referenced directly

However, I am considering always requiring the "new", but letting the compiler determine whether to allocate dynamically, on the stack, or statically.

Edit: A manual distinction is indeed necessary, as sometimes you want new instances to persist after the current function finishes, and the compiler cannot always tell when that is.
(Edit: the ideas of this post are also better formed in my next next post)
I'll really have to try that one day. It seems very complete already in the description you give us.
(Edit: Removed this to-do list; see next post instead)
"Final" thoughts on currently outstanding language changes:

In summary, I am looking at replacing the preprocessing (#) stuff with a package-system, C#-ish user-defined attributes with controlled access/manipulation of the syntax tree, compiler variables, etc. Some of these changes are large (surprise!), but I am presenting them after doing a lot of thinking about how to do things and what is necessary. Also, a few smaller changes / loose ends I've tied down.

Package System:

A package consists of (multiple) source files (.alp) and one package (.apg) file, all of which must start with the same package declaration ("package Foo;"). Packages are "compiled" into a single unit containing meta-data about the code (compiled syntax tree, and assembly code to whatever extent possible). The package file lists other files (or directories) that are included in the package, and may contain declarations that apply to the entire package.

"Importing" a package within a file gives a that file access to the code in that package. Importing within a package file does so for all files in the package. The compiler only loads (compiles, or reads the already compiled data for) a package once no matter how many times it is "imported". The code of an imported package remains in the "namespace" of the package (e.g. "import Foo; ... new Foo.Bar()), but specific contents of a package can also be imported (e.g. "import Foo.Bar; ... new Bar()).

Packages (and their contents) must reside in predictable locations (nearby directories, or in a (configurable?) default directory that the compiler always checks, or in some directory specified by a package as a place to look for that package's imports). If a file is part of a package, then attempting to compile it will cause the compiler to look for the package file (nearby) and compile the package.

"Inner" packages (packages "within" packages) depend on (automatically "import") the parent package, and must be referenced inside the package-file of the parent package. For example, package "Foo.Bar" refers to package "Bar" inside of package "Foo". "Foo" can be imported without "Bar" ever being imported, but importing "Foo.Bar" also imports "Foo". Additionally, if "Foo" is imported, then "Bar" will automatically be imported if any reference to "Foo.Bar" is made (this is why the parent package must be "aware" of inner packages).

Packages are first-class objects. That is, all functions are "methods" of the package they reside in (if not in a class or other object), packages may inherit from classes and compose ("mix in") traits to bring in code (brings code directly into the "namespace" of the package), may contain constructors for the whole package, and even be passed around as objects. (Packages can not be inherited FROM, though; they are not classes). This works without extra overhead, because the IX register will be reserved just for the "this" of methods, and the "methods" of a package (or of any singleton object) can ignore the "this" parameter because the "members" can already be accessed directly (because they are "static", or only exist in one place).

Attributes:

An attribute is like a compile-time class, which can be used to attach meta-data to elements of code (variables, classes, objects, traits, functions, packages, etc.). This meta-data is exposed in the syntax-trees of compiled code, and thus can be accessed by anything that can interface with the compiler or with compiled packages.

Attributes may also contain compile-time ("interpreted only") initialization code that interfaces directly with the compiler (e.g. get/set compiler properties) and with the code that it is attached to (e.g. prepend other code). This init code will be normal code as found anywhere else in the program, but will take a "this" parameter which maps to the syntax-tree item that it is attached to.

For example, different attributes may be used to flag data to use for a program "icon" and functions to handle "update" and "render" parts of a game kernel; and then another attribute (or a package constructor) may contain initialization code to search for items with these attributes and, if found, insert a sort of "program header" containing these items (by inserting the "code" for it at the start of the main function). Or perhaps attributes can be made to implement pre-/post-conditions.

I'd have to decide what operations to allow (inserting code before/after functions, other modifications to the syntax tree, etc.), and provide some "Compiler" object to interface with the compiler (get properties given to the compiler, issue compile-time errors, etc.). Attributes themselves would have built-in methods to query for which items they are used on, etc. I may also want to add methods for basic file I/O, prompts, etc. (for example, read a program "icon" from a file, or interface with an external program which could be included as a resource file in a package to begin with).

After parsing all the source code and building a syntax tree, but before "compiling" it all, the compiler will go through and initialize all the attributes on things, which in turn may make other modifications to the syntax tree. (This is why "compiled" packages might not always be able to contain the "fully" compiled assembly code for everything, because it may depend on how it is used when it gets imported).

Code:
// Attribute declarations:
attr ProgramIcon; // Has no properties on it
attr MyAttrib(myProp) { ... } // has one property and init code

[ProgramIcon] // Attaching a "ProgramIcon" to myIcon
bool[10,10] myIcon = { { ... }, { ... } , ... };

[MyAttrib(true)] // Attach a "MyAttrib" to someFunc,
func someFunc() { ... } // will init with myProp = true;

[MyAttrib(myProp=true)] // Again, but with named argument
func someOtherFunc() { ... }

Package-level "if":

The "#if" preprocessor-directive would be replaced with a regular "if" statement which can appear at the package level, with the condition using normal code (but most likely referring to things related to attributes or compiler properties as exposed through some compile-time-only "Compiler object"), with semantics that the containing code is only to be included if the condition is true (or removed/ignored if false). A similar "if" inside a function already carries the same semantics:

Code:
package Blah; // Just to illustrate package-level context
if(Compiler.Device == TI83PLUS) {
   func FooFor83P() { ... }
   class ClassFor83P { ... }
}
else {
   func FooOtherwise() { ... }
   ...
}

func Something() {
   if(Compiler.Device ... ) // this would normally be
   ... // evaluated at compile-time anyway (same semantics)
}

This feature would allow items to be removed/added from the syntax tree dynamically without relying on a preprocessor (another built in "language"), and without requiring me to provide "Compiler methods" to make such modifications otherwise (because then I'd have to restrict when/where that code could be used).

Note: I still have to decide whether these get evaluated before or after attribute initializations, or if they can "cycle" (and how).

EDIT: There are two kinds of "package-level" ifs: those that can be handled in one static pass (as above), and those that depend on type-parameters and must be handled per usage (below):

Code:
class Foo<T> { // Foo is parameterized on some type "T"
   int presentInFooForAllTs; // Foo has this for any T
   if(something about T) {
      int OnlyForSomeTs; // Foo only has this when stated
   }
   if(something Not About T) {
      int HereOrNotButSameForAll; // Whether Foo has this depends on the condition, but is the same for any T
   }
}

Situations like this^ (where members of something depend on usage) cause inconsistencies which make the "wild-card" (Foo<?>) behavior (described below) difficult or impossible. I will have to decide whether to disallow such inconsistencies to exist (e.g. every "branch" of those "ifs" must result in something equivalent), or find some other way to limit/modify the "wild-card" behavior to work with/around/within such inconsistencies.

Classes support single-inheritance:

Code:
class Base {
   func Blah() { ... } // non-virtual, called directly
  vfunc Foo() { ... } // virtual (stored in vtable)
  vfunc Empty(); // "pure virtual" / "abstract"
}

class Sub : Base { // Sub inherits from Base
  vfunc Foo() { ... } // Override Base definition of Foo
  vfunc Empty() { ... } // Provide implementation for Empty
  val Bar b { vfunc Inner(..){..} } // val members (stored by value) can have overrides at declaration, with direct access to the containing class (Inner has direct access to all of Sub).
}
...
Base b = someSub; b.Foo(); // calls Sub.Foo

Other small changes:

Code:
// Overloading the [] operator (presence of return-type):
func [](int a,..,z,val) { ... } // (set) this[a,..,z] = val;
func [](int a,..,z):int { ... } // (get) this[a,..,z];

asm { // The new "asm" command (no #preprocessor):
   ld a,$(someVarFromCode); // instructions end with ;
   call foo; // same syntax for comments as elsewhere
}

// Enums can be "of" any type with values of that type:
enum IntValues { A = -1, B = 2, C = 300 } // infer type as int
enum<Foo> FooValues { A(1), B(2), C(3) } // or A = Foo(3), etc.

// Equality operators == and "is":
A == B; // Compare primitive values or references (overridable)
A is B; // Test if A is an instance of B (not overridable)
trait T { func ==(...); } // allowed in traits

Code:
// Singleton objects:
object Foo { // Like "class", but "Foo" is also the only instance
   func new(...) { ... } // Objects and Packages can have constructors (only called if the entity is ever used).
}

class Foo { ... } // "object Foo" holds the "static" members of Foo, but is also an object in of itself (can inherit, etc.) Foo can implement traits by virtue of functions in "object Foo"

// Type-parameters: wild-cards, contravariance/covariance:
List<Foo> foos; // Compiler makes a version of "List" code for "Foo"s
List<?> things; // Version of "List" with extra references to code
things = foos; // Now things holds a reference to foos AND the Foo code
List<in Foo> // Can reference a list of any one type that Foo inherits from
List<out Foo> // Can reference list of any one type that inherits from Foo


Less important features that I may consider later on:

- Ability to test exec-point in yieldy funcs (something akin to "MoveNext" and "Current" in C# enumerators)

- A "default value" implementation (default(Foo) is a "Foo" will all "zero" values / nulls; for use with templated/generic types)

- Explicit non-null types (NullableType, NonNullable!), and nullable equivalent (Blah, NullableBlah?)

- "Trust me that it's not-null" operator: Foo!! (error if compiler can prove you wrong)

- Var-args implementation? (use the stack, or implement as array?)

- Class literals as a short-hand for calling an "Add" or "Append" method automatically:
new Dict<string,int> { { "Hello", 5 }, { "Hi", 6 } }; // Dict.Add(string,int)

- Literals for other things (regex, floating-point), each require a class implementation to exist for it
Not being able to overload == is sad to me (of course, overloading it can sometimes cause strange results). It seems that if I have a class Point3D, and I want to test two of them for equality, it'd make more sense to just do p1 == p2 than p1.X == p2.X && p1.Y == p2.Y && p1.Z == p2.Z. Though I suppose p1.Equals(p2) or Point3D.Equals(p1, p2) isn't too too bad. It would just be nice.

As for the attributes stuff, I'm a fan.

Meanwhile, when do we get a functioning compiler?
merthsoft wrote:
Not being able to overload == is sad to me ... It'd make more sense to just do p1 == p2 than p1.X == p2.X && p1.Y == p2.Y && p1.Z == p2.Z. Though I suppose p1.Equals(p2) or Point3D.Equals(p1, p2) isn't too too bad. It would just be nice.
EDIT: I've decided to allow == to be overridden (also edited post above). Default behavior compares primitive values or references otherwise. For traits (interfaces), the default is reference equality, unless it is defined in the interface, and then it is whatever it is for the underlying values that implement it. I'll have to provide a "RefEquals(A,B)" (maybe also an "Any" type, and then define function RefEquals(Any A,B)).
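
In other words (sketch, using merthsoft's Point3D):

Code:
Point3D p1, p2;   // suppose Point3D overloads == to compare X, Y, Z
(p1 == p2)        // uses the overload: compares the coordinates
RefEquals(p1, p2) // always asks "are these the same instance?", regardless of overloads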

merthsoft wrote:
As for the attributes stuff, I'm a fan.
I was afraid they might scare people; but I'm glad you see my case for them as good Smile

merthsoft wrote:
Meanwhile, when do we get a functioning compiler?
Not any time soon, but I still think I'll get there eventually. Thanks for the continued discussion, though Smile
  