Well, since the Z80 has a von Neumann architecture, we can write directly over the code we're running, on the fly.

We can open this can of worms for some significant performance improvements, though I can't think of many offhand, probably because my knowledge of Z80 is pretty close to nonexistent. Rolling Eyes

Actually, I do have one in mind. Suppose we have a piece of code that is supposed to write something into memory, or push a register onto the stack, which will influence the outcome of a branch later on. Instead of doing that, we can write directly to the memory address of the branch instruction itself. This spares the CPU from having to perform a comparison later on: the branch is simply given a predetermined outcome, which stands until it is changed again.
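In rough Z80 assembly, it might look something like this (just a sketch with made-up labels; I'm assuming the JP encoding here, and syntax varies by assembler):

Code:
; When the deciding event happens, patch the jump's target
; instead of storing a flag to test later:
        ld   hl,TakenPath      ; or NotTakenPath, as appropriate
        ld   (Branch+1),hl     ; overwrite the 16-bit operand of the JP below

; Later, in the hot path, no comparison is needed at all:
Branch: jp   $0000             ; placeholder operand, patched at run time

The JP opcode is followed by a little-endian 16-bit address, so writing HL to Branch+1 swaps the destination in place.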

This is why I lose time thinking about Z80 code and never end up learning anything or getting anything done: I just overthink it. I think about insane ways to optimize my code, which are hard to get working, since a tiny change or addition can throw everything off. I need to stop reading about the black magic in the restricted section and think about real, unoptimized, practical, working Z80.
Regarding optimization:

I had a similar problem years ago when I briefly dabbled in Z80 ASM. Everyone used to stress the importance of optimization so heavily that I became preoccupied with writing only “good”, optimized code, which got in the way of getting much done, especially since I was just a beginner and hadn't yet had the experience to teach me what “good” code even looks like.

Nowadays, there's a popular practical philosophy regarding optimizing code: don't. Razz At least, not while you're still trying to write the program. Concentrate on getting the code working at all first. If it works, and is small and performs well enough, don't bother optimizing, unless perhaps you have time to burn. And if it doesn't, there may be an alternate algorithm that is both faster and easier to discover and implement than hand-optimizing the existing code (which isn't guaranteed to produce the performance or size gains you need anyway).

In other words, only optimize if absolutely nothing else works or you have time to spare, especially optimizations that increase maintenance cost or impair legibility. Slower but safe, correct code is usually more desirable than faster but buggy or crash-prone code.

Of course, this idea may not sit well with those who coded “on the bare metal” back in the day, on machines with processor speeds of, oh, one microhertz and memory sizes of maybe half a bit or so, Razz and who prided themselves on still doing something impressive with them. But for modern software development on reasonably capable but very complex computer systems, this is usually the preferred advice.

Regarding self-modifying code:

It can be pretty creative and fascinating, but a lot of the above applies. It often makes code harder to read and maintain, and that typically isn't worth the (usually modest) performance gain. And although it generally works on old processors like the Z80, IIUC modern processors with prefetch queues, instruction caches, and the like don't lend themselves well to self-modifying code. (Also consider cases where your code might run from non-writable memory, such as flash, or under a modern protected-memory OS, which deliberately prevents programs from writing to their own code in order to block security exploits. Self-modifying code is therefore less portable and flexible.)

All the above is mainly said from a practical software development standpoint, though; there's nothing wrong, I think, with exploring things like this for fun. It might be stimulating and educational.

(Personal anecdote that might be interesting: several years ago, I was reading about some of the earliest commercial computers, and I came across online scans of the programming manuals for one of them (the UNIVAC I, I think). I found it interesting that its machine language was designed such that program loops were implemented exclusively through self-modifying code; that was actually the intended, officially suggested way to do it.)
I agree 100% with Travis.

I had plans for an amazing OOP language for the Z80 (Antelope) that never got completed, because I was too focused on trying to make it perfect: maximally efficient, while offering high-level mechanisms without excess overhead.

I even had a special "assignable switch" construct that acted like an enum, except that its value was stored directly within the jump instruction at the top of the switch, and the assignable values were actually the addresses of the case statements within it.
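In Z80 terms, the generated code might have looked roughly like this (a hypothetical reconstruction, not actual Antelope output):

Code:
; The switch's "value" lives inside the JP instruction itself:
Switch: jp   CaseA            ; current state = the address in the operand

CaseA:  ; ...handle state A...
        ret
CaseB:  ; ...handle state B...
        ret

; "Assigning" to the switch means patching the jump target:
SetB:   ld   hl,CaseB
        ld   (Switch+1),hl
        ret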

Pretty clever, but choosing to use it means the programmer is concerning himself with low-level details that the compiler should be taking care of for him. Perhaps a good compiler could recognize the pattern and apply the trick without the programmer knowing about it, but that's a small gain for a very specific situation that didn't come up very often.

One cool compiler trick that involves SMC and actually pays off is storing local variables within the instructions that use them. This makes variables cost less than nothing, because they don't take up extra memory and don't have to be fetched from memory separately (well, not apart from the machine code itself).
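A minimal sketch of the idea, assuming the code runs from RAM (the label is invented; the "= $+1" idiom points at the immediate byte of the following instruction, and directive syntax varies by assembler):

Code:
counter = $+1                 ; address of the immediate byte of the LD below
Loop:   ld   a,0              ; the "variable": this 0 is overwritten in place
        inc  a
        ld   (counter),a      ; store the new value back into the instruction
        cp   10
        jr   nz,Loop
        ret                   ; note: the stored 10 persists across calls,
                              ; so re-initialize it before reusing the loop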

Even then, though, the gain is relatively small overall, and it can have other downsides (e.g. to allow recursion, you need extra work to copy the embedded values to and from the stack).
Also, if you're really trying to find something fundamentally new, you will never get there through optimization: see Alan Kay's talk "The computer revolution hasn't happened yet".

Also also, if you're REALLY interested in the potential of self-modifying code, or code generating code, that's more in line with LISP or Scheme (or Rebol), where code IS a data structure. Without that, SMC is just an optimization, and as stated in the Kay video, that doesn't get very far; the other end of the spectrum is a system that is the engine of its own modification and replacement.
I do tend to agree that working code is the best place to start; you can then optimise from there.

Self-modifying code can make for some good optimisations, but can also be overused.

Some of the things you suggested are indeed what make SMC useful, such as modifying the target address of a call or jump before the instruction executes, or updating a constant embedded in the code.
shkaboinka wrote:
Also also, if you're REALLY interested in the potential of self-modifying code, or code generating code, that's more in line with LISP or Scheme (or Rebol), where code IS a data structure. Without that, SMC is just an optimization, and as stated in the Kay video, that doesn't get very far; the other end of the spectrum is a system that is the engine of its own modification and replacement.


Metaprogramming is an interesting concept and is something I've done on the HP 50g, whose programming language was heavily inspired by Forth and LISP. (Doing this is somewhat limited in pure UserRPL but is fully possible and trivial in SysRPL, or in UserRPL with the help of a library or small wrappers around certain SysRPL calls). I think I once made a “Merthese compiler” this way, and I also used it in a SysRPL program to try to squeeze every possible bit of speed out of it. In the latter case, I basically had a tight, inner loop that was dynamically regenerated upon certain events to include just the code that actually needed to run for the given situation. Arguably, I made a mistake by not actually testing whether such an elaborate optimization was truly necessary before adding that sometimes rather confusing layer of “meta-ness” to my code. But I certainly learned something and gained experience doing it. Wink
Definitely agree with the points that Travis has made as a general rule of thumb: the most important thing is to have working code. Then, if you need or want to, you can optimise from there.

Back in the days of the TI-83, I tried my best to optimise every little bit of my code, although 95% of the time it wasn't for speed so much as for program size. But I generally followed the above approach; I would write the code, and once it was working as intended I would go back later in development to see what I could shave off Smile
How about an entire self-modifying, self-defined, self-executing system?

https://www.cemetech.net/forum/viewtopic.php?p=264811#264811
oldmud0 wrote:
I think about insane ways to optimize my code,


Premature optimization is the root of all evil.

First, you need working code (which may require some degree of optimization if the target platform is resource-limited).

Then you need code that is readable: easy to understand, debug, modify, and extend.


I would only open the can of self-modifying-code worms after a very, very thorough review of its necessity. If you can save the company $0.25 per unit by using a smaller processor in a product produced in volumes of >100k units/year (better yet, >1M units/year), then maybe. If it is a project for your own education or fun, sure, go for it.
CtrlEng wrote:
oldmud0 wrote:
I think about insane ways to optimize my code,


Premature optimization is the root of all evil.

....


Do be aware of the age of the original post.
  