I've written a Python program to parse the XML token specifications used by TokenIDE and produce a (f)lex-based lexical analyzer on the fly.

I've tested it with both the pure-basic and default library-enabled token sets without difficulty, and I have no reason to believe it should have problems with Grammer, Axe, or even Prizm BASIC tokens.

Current caveats:
  • The maximum token size is 8 bytes (or, more technically, the size of a long int on your compiler). If this ever needs to be changed, it can be, but I don't foresee that being a problem in the near future, and this way was much faster to code.
  • As far as I can tell, the lexer just strips unrecognized characters from the output. Additionally, it currently doesn't work with the alternative attributes, only the "string" attribute of the main <Token> element for each token, but I think I can add support for that tomorrow without too much work, before I upload it.
  • No parsing or syntactic analysis yet, so if you use comments as supported by TokenIDE, they won't be automatically excluded like they should be.
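To give a feel for what the generator does, here's a minimal sketch of turning TokenIDE-style <Token> entries into flex rules. The exact XML shape here (a $-prefixed hex byte attribute plus the "string" attribute mentioned above) is an assumption for illustration, not TokenIDE's actual schema, and real token text would still need escaping for embedded quotes.

```python
import xml.etree.ElementTree as ET

# Hypothetical TokenIDE-style token file fragment (byte values invented).
SAMPLE = """
<Tokens>
  <Token byte="$DE" string="Disp " />
  <Token byte="$D9" string="If " />
</Tokens>
"""

def flex_rules(xml_text):
    """Emit one flex rule per <Token>, returning the token's byte value."""
    root = ET.fromstring(xml_text)
    rules = []
    for tok in root.iter("Token"):
        byte = tok.get("byte").lstrip("$")
        # Quoted patterns match literally in flex, which sidesteps most
        # regex-metacharacter headaches for token strings.
        rules.append('"{}"\t{{ return 0x{}; }}'.format(tok.get("string"), byte))
    return rules

for rule in flex_rules(SAMPLE):
    print(rule)
```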


The output is a .bin file suitable to be fed to binpack8x, wabbitsign, or the TI-header-writer of your choice (I don't know what this toolchain would be for the Prizm). Wabbit will complain that your file doesn't begin with $BB$6D, since it expects assembly programs rather than BASIC programs, but it will do its job anyway. Also, not sure if this is a caveat, but your programs will probably be locked by default on-calculator, since most assembly-oriented signing/header-generating tools do that to keep you from mucking about with the hex.
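The complaint is easy to reproduce by hand: assembly program variables start with the two AsmPrgm bytes $BB $6D, while a tokenized BASIC .bin starts with whatever its first BASIC token happens to be. A quick sketch of that check (the BASIC-side byte values below are made up for illustration):

```python
# Leading byte pair that marks an assembly program variable on the 83+.
ASM_MAGIC = b"\xBB\x6D"

def looks_like_asm(data: bytes) -> bool:
    """True if the payload opens with the AsmPrgm token pair."""
    return data[:2] == ASM_MAGIC

print(looks_like_asm(b"\xBB\x6D\x21\x00\x00"))  # assembly-style payload
print(looks_like_asm(b"\xDE\x2A\x41\x2A"))      # tokenized-BASIC-style payload
```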

In the long haul, I would like to use this tool to add lexical analysis and "compiling" abilities for TI-Basic to other programming text editors (probably vim and Komodo Edit first).



Very cool, elfprince! This is relevant to my own work; I recently rewrote a PHP-based tokenizer and detokenizer that leverages the TokenIDE Tokens files for a project of my own. To Shaun's credit, I must say that they're very well-annotated and organized.
https://github.com/elfprince13/TITokens

Comments work (as long as they're on their own line, which I believe matches TokenIDE's behavior, though Shaun should correct me if I'm wrong on that), and alternate symbols/representations for tokens work, as long as everything is nice and UTF-8y.
This looks like it would go along nicely with TITools (originally by Ben Moody, currently maintained-ish by me), TIUtils (unfinished), and TIPack (by Ben Moody) as a complete command-line suite for the TI calculators.

Now I'll actually have a reason to get back to work on finishing TIUtils. Since this can handle all the tokenizing, I can just accept .bins for String and Equation support. Though TIPack is also pretty close to having much of the feature set I need, so I may just look into reusing some of that code as well.
I've been toying around with making a thumbdrive-sized barebones Linux distro for calculator programming. Want to see if I can't get WabbitEmu running as a framebuffer/console application.

And add syntax highlighting for z80 assembly and TI-Basic in VIM.

It would be particularly awesome if AHelper also makes progress on the LLVM stuff.
I've got a Komodo Edit-based syntax highlighting plugin working pretty well, minus numbers (which need a little regex-foo) and certain keywords with trailing spaces that don't get promoted from identifiers. It's already cool enough to highlight commands with spaces after them only if the space is actually present.
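For what it's worth, a first stab at the "regex-foo" for numbers might look like the pattern below: digits with an optional fractional part, or a bare fractional part. This is a sketch, not the plugin's actual pattern, and it deliberately leaves out TI's small-E exponent token and the negation sign.

```python
import re

# Candidate numeric-literal pattern for TI-BASIC-flavored source text.
NUMBER = re.compile(r"\d+\.?\d*|\.\d+")

for text in ("42", "3.14", ".5", "Disp"):
    # fullmatch: the whole lexeme must be the number, not just a prefix.
    print(text, "->", bool(NUMBER.fullmatch(text)))
```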

Double post but it's okay: here's the download if anyone wants to try it: http://sfgp.cemetech.net/komodo-plugins/tibasic-1.0.0-ko.xpi

Also, in fun news, a Komodo developer/ActiveState employee commented on my YouTube video only a couple of hours after I posted it! No idea how/why he found it, but it's fun Smile
Elfprince, given that I made my SC3 preprocessor generate a list of keywords and such (a la this), would you be interested in helping me write a general state machine for syntax-highlighting BASIC, Axe, and Grammer code for CodeMirror?
Are we targeting JavaScript? Might be worthwhile to throw my lexer at emscripten as a starting point and see how that goes. If that works and you build a numeric-token-value-to-color LUT, it would probably be pretty fast.
elfprince13 wrote:
Are we targeting JavaScript? Might be worthwhile to throw my lexer at emscripten as a starting point and see how that goes. If that works and you build a numeric-token-value-to-color LUT, it would probably be pretty fast.
JavaScript, yes, but we'd be working with the text representation rather than the numeric representation, so that on-the-fly syntax highlighting is possible. Here's how CodeMirror's state-machine stuff works:

http://marijnhaverbeke.nl/blog/codemirror-mode-system.html
TI-BASIC and Axe syntax highlighter engine for CodeMirror.

Cheers Smile

EDIT: It's minified. Here's an expanded version.

EDIT2: Ignore the weird stuff in the first 56 lines. All you need is the functions starting on lines 57 and 132, for TI-BASIC and Axe respectively. Grammer highlighting hasn't been implemented yet.
Right. My suggestion is to take my existing flex-based tokenizer, compile it to JavaScript with emscripten, and have something that looks like this:


Code:
return styletable[tokentypetable[yylex()]];


For that to work, we need two tables based on numeric values of tokens.
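Here are those two tables in miniature, written in Python for readability (the real ones would be JavaScript living next to the emscripten-compiled lexer). The token values, category names, and CSS class names below are all invented for illustration:

```python
# Table 1: numeric token value -> token category (values are hypothetical).
tokentypetable = {
    0xD9: "keyword",
    0xDE: "command",
}

# Table 2: token category -> highlighter style class (names are hypothetical).
styletable = {
    "keyword": "cm-keyword",
    "command": "cm-builtin",
}

def style_for(token_value):
    """Chain the two lookups, as in styletable[tokentypetable[yylex()]]."""
    kind = tokentypetable.get(token_value)
    return styletable.get(kind)  # None means "leave unstyled"

print(style_for(0xD9))  # a known token gets its style class
print(style_for(0xFF))  # an unknown token gets None
```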
Deep Thought, that's very kind of you, but since this is for a semi-competing project, I'm hesitant to take advantage of your hard work. Smile If you have no qualms about that, I'll go ahead and give that a try.
My lexer compiles to JS quite easily! I really think it should be easy to plug into CodeMirror. The tricky part will just be adapting the buffers to work with the stream + state object provided to the token function, but I don't think that should take too much work.
Bump: it generates highlighters for VIM now as well, though Unicode/UTF-8 compatibility is broken in VIM at the moment, since they use different escape sequences.


[edit]

Not true. Everything is fine now =) I added slightly weirder escaping code, but it's all good, and the VIM version supports Unicode characters.
*bump*

This has been added to the community package manager for Komodo, in case anyone is still paying attention.