People are still programming in BASIC and hybrid BASIC on their graphing calculators. There's even still a need to support the 83 and 85 lines. The newest generation of CELTIC is here and neat. Some students are restricted from using a web-based editor, and some are restricted from downloading an editor.

So, all this to say, there's at least some value in thinking about what a modern BASIC IDE would look and act like. I'd like to use this as a general dumping ground and discussion area for what would work.

Here are my ideas so far:
1) Should work with existing XML tokens files. There's a long history of support for these, and people may even have some of their own.
2) Should work cross-platform and on the web, ideally with the same UX (I've been looking into Blazor and Uno to this end).
3) Calculators: 73, 82, 83, 85, 8x, and raw bin export
4) Support for variable formats outside of programs, e.g. numbers, matrices, etc.
5) Tools for manipulating data, like image editors, sprite editors, and color pickers.

6) More technical: I would want to build a library to make it easy to manipulate TI variables, and release it independently. This would work, for example, as a NuGet package any C# dev could reference. Code should be stored in the TI Toolkit repo.

I want to hear from the community though--especially from active BASIC and hybrid BASIC developers. What would you want/need in a BASIC editor to make it complete?
Thanks for making a topic for this. Conversations have abounded on various Discord servers and voice chats, but they're not too meaningful just floating in the ether of people's brains.

My own two cents on each of your points, as a semi-active BASIC and Toolkit dev:

1) The TI Toolkit token sheets are kept in a "new" XML/JSON format, with tools to translate to TokenIDE's format (iirc this is a two-way street; I should probably remember my own code better). The main difference is versioning: the Toolkit sheet is a one-and-done file for each line of calculators, with tags for versioning, while TokenIDE uses a distinct sheet for each sufficiently-distinct model. Both approaches have their pros and cons.

Furthermore, neither contains everything an IDE might need (e.g. command syntax information, which is also currently being consolidated into nice formats for things like the TI Toolkit Bot), so the chosen format should allow for expansion to contain it; alternatively, such information could be relegated to separate files, but with a similar enough format to reuse code and remain just as "readable" as the main token info.

2) - 4) speak for themselves, yes to all of it

5) The hybrid devs could speak better to what sorts of tools they tend to need, though making it easy to add or contribute new tools is paramount, perhaps through some sort of add-on system (ideally a bit more sophisticated and interoperable than TokenIDE's ability to link an executable to a button).

6) Does that mean we're getting a third member of the tivars_lib family? The more the merrier, so long as there aren't too many practical differences between them. The choice between tivars_lib_cpp, tivars_lib_py, and the new secret third thing should just be a matter of which language the user wants/needs to use. And it doesn't need to be in the Toolkit org, though we'd be happy to have it there.

I'd very much like to continue to assist in the making of this, however it happens to proceed, as I'm sure would many other folks here who haven't left BASIC behind. I'll leave the essay on actual editor behavior ideas to iPhoenix.
Quote:
Does that mean we're getting a third member of the tivars_lib family? The more the merrier, so long as there aren't too many practical differences between them. The choice between tivars_lib_cpp, tivars_lib_py, and the new secret third thing should just be a matter of which language the user wants/needs to use. And it doesn't need to be in the Toolkit org, though we'd be happy to have it there.


That's my desire. Reference implementations and libraries for anyone wanting to do this sort of thing. Between Python, C++, and .NET (and the resulting WebAssembly library we could get from that), we'll be enabling people no matter their language choice, and the more the libraries all work the same, the easier it is for any one of us to offer support even outside our own language.

My own editor would then just reference the NuGet packages--guaranteeing I keep things up to date in that regard :P
So would this be an on-calc editor with a broad range of tools and languages?
TI-BASIC is freaking crazy. Beware, traveler.
TI-Toolkit has put some serious thought-effort into this; I'll summarize our thoughts, the rationale, and attribute insights to specific people where possible. This is a problem that’s plagued my mind every four months for the past two years, and I’m happy to have an opportunity to write it all down.

The first thing is using our token sheets. They contain useful things for such an editor: common keyboard-accessible ways of typing things, renderings in the TI font, elaborate token histories ("I have this calculator and OS; what tokens are supported and what do they look like?" can be answered by properly parsing the sheets, and we have example Python code for doing so), and more. A means of defining translations is specified but not implemented, and the whole structure is verified in CI; it has been well-engineered to support this use case (I do not expect to make any backwards-incompatible changes to add the things I want to add). Furthermore, the sheets contain a number of substantive corrections to the TokenIDE data used by TokenIDE itself and SourceCoder.

A JSON version of the data is available with all the same information, and there is a script for exporting a specific snapshot in time as a TokenIDE-compatible file (e.g. "generate me a TokenIDE sheet for a CE on OS 5.3.0").
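To make the "snapshot in time" idea concrete, here's a minimal sketch of filtering versioned token data down to what a given model/OS supports. The schema and token bytes here are illustrative assumptions, not the real TI-Toolkit JSON layout:

```python
# Toy versioned token sheet -- schema and byte values are illustrative only,
# not the actual TI-Toolkit JSON format.
SHEET = {
    "0x40":   [{"since": "CE 5.0.0", "display": " and "}],
    "0xEF97": [{"since": "CE 5.3.0", "display": "piecewise("}],
}

def snapshot(sheet, model, version):
    """Collect the tokens valid on a given model at a given OS version."""
    def covers(entry):
        since_model, since_ver = entry["since"].split(" ")
        if since_model != model:
            return False
        # Compare dotted version strings numerically: (5, 3, 0) <= (5, 2, 0) is False.
        return tuple(map(int, since_ver.split("."))) <= tuple(map(int, version.split(".")))
    return {
        byte: entry["display"]
        for byte, entries in sheet.items()
        for entry in entries
        if covers(entry)
    }

# On OS 5.2.0, piecewise( (introduced in 5.3.0) is excluded from the snapshot.
print(snapshot(SHEET, "CE", "5.2.0"))
```

The real sheets track removals and renames as well, but the core operation is the same: project the full history down to one point in time before emitting a TokenIDE-style file.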

Adriweb has a CSV which has details for all of the command arguments, though this is not tracked through history, to my understanding (and, like most CSVs, it's a mess <3). Some work needs to be done to merge this data in, but it will be done in a backwards-compatible way. It looks like I was confusing this with his JSON (distinct from the JSON mentioned above), which is a confluence of token data and command data from various sources; see his remarks below.


The next thing is actually turning text into a token stream.

First, let’s examine what we want:

  • It should be predictable (i.e. have reasonable defaults for every circumstance).
  • It should be interactive and configurable, for when the reasonable defaults do not describe what you want.
  • It should be obvious at-a-glance what is going to be produced.
  • We want something which is an involution; i.e. every program can be exported, imported, and exported again and you will receive the same program. This sounds easy, but I believe every single program editor today fails at this one. This means overriding the reasonable defaults on import.
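The round-trip requirement above can be stated as executable property: export, re-import, and re-export must be a fixed point. Here is a toy model with made-up token bytes where the property trivially holds (real editors fail it because their munching is context-dependent in ways that aren't recorded on import):

```python
# Toy model of the round-trip requirement. detokenize is the "export" step,
# tokenize is the "import" step. Token bytes and names are made up for the demo.
TOKENS = {0x40: " and ", 0xDE: "Disp ", 0x2A: '"'}
NAMES = {v: k for k, v in TOKENS.items()}

def detokenize(token_stream):
    return "".join(TOKENS[t] for t in token_stream)

def tokenize(text):
    # Greedy maximal munch over the known token names.
    out, i = [], 0
    while i < len(text):
        for name in sorted(NAMES, key=len, reverse=True):
            if text.startswith(name, i):
                out.append(NAMES[name])
                i += len(name)
                break
        else:
            raise ValueError(f"untokenizable text at index {i}")
    return out

program = [0xDE, 0x2A, 0x40, 0x2A]               # Disp " and "
assert tokenize(detokenize(program)) == program  # the round-trip property
```

The hard part, as described below, is preserving this property once strings, translations, and version differences force the importer to override its own defaults.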

I note the absence of backwards-compatibility with existing text formats. There are too many of them, and all of them suck. A good effort should be made to do something intelligent with them--and indeed, many of them made their way into the token sheets as variants, so properly supporting the token sheets would be an intelligent thing--but it is not a priority of mine.

Let’s learn from how the existing options with the widest adoption fail to meet these common-sense wants.

SourceCoder and TokenIDE maximally munch tokens regardless of context (though there is a little bit of extra sauce to handle backslash-escaped tokens). This is a significant problem; it does not do what the user expects. Consider an English-speaking program author writing menus, whose program is then downloaded by someone with their calculator language set to French. If the program contains "and" in a menu string, as it is likely to do, the French calculator will display a sentence which is entirely in English except for one word in French. This is exacerbated by tokens like LEFT and RED existing only on certain calculators, and further still by the fact that the token translations may now cause some lines of text to run off the screen: you need to know all of the tokens and all of their translations in order to know what your program will look like, but that's pretty obviously the entire point of having tools like the editor. What's more, if TI decides to add a new token in a future OS release, it might retroactively change what sequence of tokens a string is tokenized into. This is why having history-aware token sheets is important.
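A tiny demonstration of the pitfall: a greedy, context-insensitive tokenizer happily turns the English word "and" inside a menu string into the `and` token, whose display then follows the calculator's language. The token bytes and the " et " French rendering are assumptions for the demo:

```python
# Context-insensitive maximal munch, as SourceCoder/TokenIDE do.
# Token byte and the French display " et " are illustrative assumptions.
EN = {0x40: " and "}
FR = {0x40: " et "}

def naive_tokenize(text, displays):
    """Greedy maximal munch with single characters as a fallback."""
    out, i = [], 0
    names = sorted(displays.values(), key=len, reverse=True)
    byte_of = {v: k for k, v in displays.items()}
    while i < len(text):
        for name in names:
            if text.startswith(name, i):
                out.append(byte_of[name])
                i += len(name)
                break
        else:
            out.append(text[i])  # keep plain characters as-is
            i += 1
    return out

def render(stream, displays):
    return "".join(displays.get(t, t) for t in stream)

stream = naive_tokenize('Menu("HIT and RUN"', EN)
# On a French calculator the same token stream renders as: Menu("HIT et RUN"
print(render(stream, FR))
```

The string the author typed and the string the French user sees differ, and nothing in the plain-text source warns either of them.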

The TI-Planet Project Builder does something smarter: it maximally-munches while out of strings and minimally munches while in them. This breaks a different set of uses; TI allows you to execute some code stored in strings and if that code is minimally munched it will ERR:SYNTAX. Furthermore, Send( does some amount of string interpolation for some reason (I think an intelligent editor should do intelligent things here), and minimally munching also breaks that.

The prevailing consensus is that tokens <-> pure text is a lost cause if you want the plain text to match what you see on-calculator; there must be an additional layer of information to resolve these problems. There are two distinct reasons why someone would want a textual format for their tokens. The first is to communicate with others; I think just exporting a list of token displays or accessibles (to use the attributes defined in the sheets) is the best way to do this. I much prefer this option because it is extremely obvious what is going on and reasonably backwards-compatible. Humans will figure out what tokens to use where, it looks pretty in a forum post, and it's reasonable to expect a token editor to make a good attempt at parsing this (minimal munching in strings except when it's obviously code, and maximal munching in code, counts as a good attempt in my book).

The second use of tokens-as-text requires a bit more subtlety and it is where round-trip accuracy is imperative. I would like to be able to use git to version my projects, and an IDE is singularly capable of handling both setting up the repository well, and saving and loading all files. commandblockguy and Tari independently proposed a delimited format. I’m ambivalent on using this in general, but I think this is an excellent use-case for it. So long as all contributors are using the IDE, you get nice diffs and such. Our token sheets require that accessible names are unique, so I propose U+200C-separated accessible names as a good hybrid of human and machine readability for version control.


Token editors should operate on the tokenized form internally and convert to text only when necessary (i.e for displaying and exporting). This is backwards from what the existing software does but is the most reasonable way to ensure round-trip integrity.

As for display, TokenIDE's token-underlining feature is perhaps its greatest strength. Wavejumper, KG, and I discussed this in HCWP recently, actually, and had the idea that text strings which are being minimally munched but could be maximally munched could get a dotted underline, and users could right-click to switch between minimal and maximal munching.
That’s pretty much everything, though I did not provide every reason why I like the system I described; please ask me to clarify things which are unclear. I removed a couple sections from initial drafts for brevity!
Very interesting topic and I hope this all leads to more community tools!
I'm busy myself (sometimes...) with eventually updating TI-Planet's PB with a revamped TI-Basic editor powered by the latest tools and our gathered info; hopefully it will be a nice online editing experience...

iPhoenix wrote:
Adriweb has a CSV which has details for all of the command arguments, though this is not tracked through history, for my understanding (and like most CSVs, it’s a mess <3). Some work needs to be done to merge this data in, but it will be done in a backwards-compatible way.

It's true I use a CSV (I blame my past/young self) for tivars_lib_cpp, but this will eventually be migrated to a better and more recent thing, considering what we've all been working on... That said, there's already something used here and there, based on the TI-Toolkit XML and many other things at the same time: the TI-Toolkit wiki page generator, which outputs, in addition to the wiki pages (for instance here), a big JSON with lots of info (currently powering the /tok command on various Discord servers and, soon hopefully, TI-Planet's PB).


iPhoenix wrote:
The TI-Planet Project Builder does something smarter: it maximally-munches while out of strings and minimally munches while in them. This breaks a different set of uses; TI allows you to execute some code stored in strings and if that code is minimally munched it will ERR:SYNTAX. Furthermore, Send( does some amount of string interpolation for some reason (I think an intelligent editor should do intelligent things here), and minimally munching also breaks that.

Indeed, I got feedback about broken programs when I realized people were using TI-Basic in French, and I worked around that with min munching for strings by default, hah. But yes, I need to take care of the remaining edge cases of the Send(/eval( strings that actually do rely on tokens in there... It's on my todo list, heh


iPhoenix wrote:
The second use of tokens-as-text requires a bit more subtlety and it is where round-trip accuracy is imperative. I would like to be able to use git to version my projects, and an IDE is singularly capable of handling both setting up the repository well, and saving and loading all files. commandblockguy and Tari independently proposed a delimited format. I’m ambivalent on using this in general, but I think this is an excellent use-case for it. So long as all contributors are using the IDE, you get nice diffs and such.

Years ago now, Jacobly had proposed something similar, I believe it was in the context of adding TI-Basic editing support to CEmu's program viewer (powered by tivars_lib_cpp, it was one of the intended usecases after all).
In fact, you can see he used &hairsp; to delimit tokens here (manually, for the showcase): https://jacobly.com/tifonts/
Within the past 24 hours, there's been a spurt of investigation to determine what exactly TI-Connect CE does, because it's a good first step toward coming up with our own compatible tools. I've known that it's "better" at the text -> program task than our community tools for a while, but I was surprised by its necessary complexity. I'm indebted to Adriweb and Logical for much of the impetus and insight here, though the conclusions, inferences, and specifics are mine. Both the C++ and the Python tivars libs are undergoing updates so they can be more compatible with TI's calculators. For one, TI-Connect CE is program- and list-name aware; names will always tokenize as individual letters (think prgmWHITE--maximal munching would produce a WHITE token). As a reminder, program names can be 8 letters and list names can be 5.
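The name-aware behavior can be sketched as a special case in the tokenizer: after a `prgm` token, up to 8 letters are emitted individually instead of being maximally munched. Token byte values below are placeholders, not the real ones:

```python
import re

# Placeholder byte values -- not the real TI token bytes.
MULTI = {"WHITE": 0xEF00, "prgm": 0x5F}
LETTER = {c: ord(c) for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"}

def tokenize(text):
    out, i = [], 0
    while i < len(text):
        if text.startswith("prgm", i):
            out.append(MULTI["prgm"])
            i += 4
            # Program names are at most 8 letters; emit them one by one so
            # prgmWHITE never contains a WHITE token.
            m = re.match(r"[A-Z]{1,8}", text[i:])
            if m:
                out.extend(LETTER[c] for c in m.group())
                i += len(m.group())
            continue
        for name in sorted(MULTI, key=len, reverse=True):
            if text.startswith(name, i):
                out.append(MULTI[name])
                i += len(name)
                break
        else:
            out.append(LETTER.get(text[i], ord(text[i])))
            i += 1
    return out

assert tokenize("prgmWHITE") == [0x5F] + [ord(c) for c in "WHITE"]
assert tokenize("WHITE") == [MULTI["WHITE"]]  # munched normally outside a name
```

The same carve-out would apply after the list-name prefix, with a 5-letter limit.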

TI appears to consider that someone using TI-Connect CE to tokenize a program may actually intend to send it to a B&W calculator--this addresses part of the "what if TI decides to add a new token in a future OS release" issue which I've already established a solution for in my post above. However, this behavior is not documented elsewhere, and I'd like people to be aware of it. Within strings, TI avoids producing tokens which would be incompatible with a monochrome calculator (specifically, token version 0x0A or higher), with the exceptions of eval( being tokenized as a single token in a Send( string and piecewise( always being tokenized (I assume because it's the only token which has changed name and location over the years).
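As described, the rule reduces to a small predicate: inside strings, reject any token whose minimum version is 0x0A or higher, except piecewise( (always tokenized) and eval( when inside a Send( string. The version bytes attached to specific tokens below are illustrative assumptions:

```python
# Sketch of TI-Connect CE's apparent in-string rule, per the description above.
# The per-token version bytes are illustrative assumptions, not verified data.
TOKENS = {
    "LEFT":       {"version": 0x0A},  # color-era token, blocked in strings
    "and":        {"version": 0x00},
    "eval(":      {"version": 0x0A},
    "piecewise(": {"version": 0x0A},
}

def allowed_in_string(name, in_send_string=False):
    """Can this token be produced inside a string literal?"""
    if TOKENS[name]["version"] < 0x0A:
        return True                    # pre-color tokens are always safe
    if name == "piecewise(":
        return True                    # always tokenized, per TI's behavior
    if name == "eval(" and in_send_string:
        return True                    # single token only inside Send( strings
    return False

assert allowed_in_string("and")
assert not allowed_in_string("LEFT")
assert allowed_in_string("eval(", in_send_string=True)
assert not allowed_in_string("eval(")
```

Anything the predicate rejects stays as individual-character tokens, which render identically on both calculator lines.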

I note that TI doesn't do anything special for the contents of eval(s in Send( strings- they can and probably should.

Because many of the lowercase letters have multiple possible tokenizations, TI uses the following reasonable defaults (first entry is a, last is z):

5E80,5E81,5E82,6202,6212,6216,6217,6218,6219,621A,6222,6223,6224,6234,BBB5,BBB6,BBB7,BBB8,BBB9,BBBA,BBBC,BBBD,BBBF,BBC1,BBC8,BBC9
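That comma-separated list maps positionally onto a through z, so building a lookup table from it is a one-liner:

```python
import string

# The 26 default lowercase-letter token bytes quoted above, in a..z order.
DEFAULTS = (
    "5E80,5E81,5E82,6202,6212,6216,6217,6218,6219,621A,6222,6223,6224,"
    "6234,BBB5,BBB6,BBB7,BBB8,BBB9,BBBA,BBBC,BBBD,BBBF,BBC1,BBC8,BBC9"
).split(",")

LOWERCASE_DEFAULT = dict(zip(string.ascii_lowercase, (int(t, 16) for t in DEFAULTS)))

assert len(LOWERCASE_DEFAULT) == 26
assert LOWERCASE_DEFAULT["a"] == 0x5E80
assert LOWERCASE_DEFAULT["z"] == 0xBBC9
```

Under LogicalJoe's suggestion, an editor would instead prefer the 0xBB-prefixed entries of this table inside strings and fall back to the others elsewhere.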

LogicalJoe says a more reasonable default is to use the true 0xBB lowercase tokens in strings, and the others elsewhere where possible, and I completely agree with him.

So yeah, there's some considerable engineering here, no doubt occurring over several rounds of user feedback. Most of my goals above are completely out the window, but we can still learn from TI's insights when designing our own compatible systems.