Wow, another year has passed already. This means I had some time to polish up the program a lot! It's been published as a beta on GitHub. If there are no major issues, I'll make it a full release.

Download: v2.1.0-beta.1

I started by splitting the huge main.cpp file into a bunch of smaller files. I also decided to add limited versions of std::vector, list, and map. This increased the file size but also made it significantly easier to improve the program. Some small things this allowed were alphabetically sorting picture names and rollover scrolling when going past the end of a list.

It also allowed much bigger features! Caching the pointers to subimages almost halved the time to re-draw a full picture! This is because searching through the VAT had a large performance penalty, especially with large images made up of tons of appvars. Bypassing the VAT reduced the overhead significantly.
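Roughly, the caching pattern looks like this. This is just a sketch with hypothetical names; the real program walks TI's VAT through the CE toolchain, and the linear search below only stands in for that slow lookup:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define MAX_SUBIMAGES 128

/* Stand-in for a VAT walk: linear search by appvar name (the slow path). */
typedef struct { char name[9]; const unsigned char *data; } AppvarEntry;

static AppvarEntry vat[MAX_SUBIMAGES];
static size_t vat_count;

static const unsigned char *vat_search(const char *name) {
    for (size_t i = 0; i < vat_count; i++)
        if (strcmp(vat[i].name, name) == 0) return vat[i].data;
    return NULL;
}

/* Fast path: resolve every subimage pointer once up front, then the
 * redraw loop indexes this array instead of searching the VAT again. */
static const unsigned char *cache[MAX_SUBIMAGES];

static void build_cache(const char *const names[], size_t n) {
    for (size_t i = 0; i < n; i++)
        cache[i] = vat_search(names[i]); /* one search per subimage, ever */
}
```

The payoff is that the per-frame cost becomes an array index instead of a string search that scales with the number of appvars.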

I also added screen shifting which massively improved panning around large images. Rather than clearing the screen and redrawing the entire image, I copy the portion of the image that will be kept after the pan is complete, paste it in the new location, then I just draw any missing subimages to complete the picture.
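The shift-then-fill idea can be sketched on a tiny framebuffer. This is a simplified host-side model (the real version copies a rectangle of the LCD buffer); `pan_right` assumes `dx` is smaller than the width:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define W 8
#define H 4

/* Pan the view right by dx pixels: keep the overlapping region by
 * shifting each row left, then mark the newly exposed columns so only
 * the missing subimages need to be redrawn. */
static void pan_right(uint8_t fb[H][W], int dx, uint8_t fill) {
    for (int y = 0; y < H; y++) {
        memmove(&fb[y][0], &fb[y][dx], (size_t)(W - dx)); /* overlapping copy */
        memset(&fb[y][W - dx], fill, (size_t)dx);         /* redraw region */
    }
}
```

Only the `dx`-wide strip on the right has to be rendered from appvar data, instead of the whole screen.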

Lastly, I fixed an issue where the zoom feature wasn't always zooming in. Now it will properly zoom in and out by 10%. It will also zoom out significantly further, almost to the point where you can't make out the original image. Not super useful but still fun.

I've had a number of users mention Cabri Jr. would freeze or refuse to show Artifice in the Open menu when they had more than about 4 pictures on the calculator (about 48 appvars). I never caught this issue since I always ran the program through Cesium.

The solution is to run HD Picture Viewer via Cesium or ASMHOOK. Unfortunately this means users will need to manually delete pictures from the memory management screen until Cabri Jr lets them launch Artifice.

In other news I now have access to a preview of a 16 bit color library! I'll be testing it out hopefully next weekend.
how would i make an image into a standalone prgm (basically displays the sprite for the image and waits for the user to press clear before exiting)

Edit: fixed size 320x240 img
the CE guy wrote:
how would i make an image into a standalone prgm (basically displays the sprite for the image and waits for the user to press clear before exiting)

Edit: fixed size 320x240 img

You can use the toolchain background example program: https://github.com/CE-Programming/toolchain/tree/master/examples/library_examples/graphx/background
thx I'll try that when I get the chance
Thanks to Tiny_Hacker and Roccolox Programs, there's now a 16 bit gfx library! I started slowly implementing its features months ago but finally got to a somewhat presentable point.

Although there's improvement, there's still a lot more color banding than I expected. I don't think this is due to the 16 bit library; I think it has something to do with convimg not converting properly (I tried an old January nightly build and the latest 10.1 release). I'll have to make an issue and see if the banding can be improved, perhaps with dithering.

Here's the benchmark results. I verified the CEmu images look the same on a physical calculator.
Top left: 32bit source image
Top right: 16 bit expected result
Bottom left: 16 bit image capture from CEmu
Bottom right: 8 bit image captured from CEmu




Since the library is early in development, some features like panning and zooming had to be disabled. Somehow nested maps are broken, so subimage caching is also disabled; I don't know if that's related to this library. The library is pretty cool though, and I'm looking forward to getting everything working!



(camera smooths out the banding)
I added dithering to the converter today. It barely makes a difference with images of subjects, but makes a huge difference with color gradients (especially magenta to red). There's still a major band in the green area and I'm not currently sure how to resolve that.

Below are some gifs that switch between dithered and non-dithered.
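For anyone curious what dithering does under the hood: one common approach (I don't know exactly which algorithm convimg uses) is ordered Bayer dithering, where a small threshold matrix nudges each pixel up or down before quantization so flat gradients break into a pattern instead of hard bands. A minimal sketch for one 8-bit channel:

```c
#include <assert.h>
#include <stdint.h>

/* 4x4 Bayer matrix, values 0..15 */
static const uint8_t bayer4[4][4] = {
    { 0,  8,  2, 10},
    {12,  4, 14,  6},
    { 3, 11,  1,  9},
    {15,  7, 13,  5},
};

/* Quantize an 8-bit channel down to `bits` bits with ordered dithering.
 * The position-dependent threshold means neighboring pixels of the same
 * input value can land in adjacent bins, hiding the band edges. */
static uint8_t dither_quant(uint8_t v, int bits, int x, int y) {
    int step = 256 >> bits;                     /* size of one output bin */
    int t = (bayer4[y & 3][x & 3] * step) / 16; /* 0 .. step-1 offset */
    int n = v + t - step / 2;                   /* centre the nudge */
    if (n < 0) n = 0;
    if (n > 255) n = 255;
    return (uint8_t)(n >> (8 - bits));          /* 0 .. (1<<bits)-1 */
}
```

With `bits = 5` this matches the red/blue channels of RGB565, which is where the banding is most visible.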


I think I resolved my broken maps by disabling a free() that was freeing unallocated memory. I also refactored the converter's code so the main file is slightly less of a monster.

Edit:
I realized my test pattern wasn't covering as many colors as it should have to stress-test the program. I added a dark gradient to it and boom! You can notice a massive difference between the old 8bit library and 16 bit library! I can still do better but this is good enough for now.

I'm as finished as I can be with the current state of the 16 bit library. I've fixed deleting images and the help screen, and added a bit more color to the splash screen.


These are some gifs that show the difference between the 8 and 16 bit libraries.
  • The worst-case-scenario gradient shows an obvious difference.
  • The colored noise makes it harder to see a difference, but the flickering is noticeable.
  • The puppy picture makes it very difficult to tell a difference; it's impressive how high fidelity you can get with just 256 colors! I had to zoom in to see the change (car's tail light, foot in the bottom left corner).

I'm still quite happy with the 16 bit library since other images I have will benefit a lot more from smoother gradients. It also allows me to use proper grey colors in my UI without needing to use up palette entries in every picture. Also, the gradients will look a lot better when zoomed in.

However, there's still an obvious benefit to 8bpp, particularly file size. Using zx0 compression, the colored noise picture takes up 77kb in 8 bit mode but 152kb in 16 bit mode. Next update I'd like to support a wide range of bpp modes (1, 2, 4, 8, and 16). That way the user can decide whether they desire higher fidelity or lower file size.

To continue this project, I would just need the 16 bit library to support the equivalent of:
  • gfx_ScaleSprite() so I could display images greater than 240p.
  • gfx_CopyRectangle() wouldn't be required, but would speed up panning significantly.

For the time being, you can try out the 16 bit version of HDPICV here: https://github.com/TheLastMillennial/HD-Picture-Viewer/blob/dc4320a27d5482cd192859d54357b24903a883a9/bin/HDPICV.8xp
Here's an example image: https://1drv.ms/u/c/b27ced2546bad95f/Eb2R8ot7w01Es2FQSQuABuUBh-B8Pr4ZeNkccyvkiDsWOw?e=Jpg4kR
Looks great! I had to refer to your notes to see the difference with the dog picture, which shows the strengths of targeted 8bpp, but the possibilities that 16bpp opens up are great!

I had an idea of suggesting a programming competition based on 16bpp graphics ... which you would have surely won for this! 🙂
It's that time of year again! The 16 bit library makes beautiful pictures but even with compression, they're still very large. If you just want to view something simple like a document, there's no need for 16 bit or even 8 bit color. That's why I've added support for 1, 2, 4, 8, and 16 bit color!

1 bit | 2 bit | 4 bit


(Dithered) 1 bit | 2 bit | 4 bit


8 bit / 16 bit (for comparison, it's the same gif from previous post)


When displaying a black and white image (with compression), 1bpp files are 60% smaller than 16bpp files and 40% smaller than 8bpp files. I recently learned that convimg had dithering support built in this entire time, and it makes a massive improvement to image quality, so I'll be adding that to the converter as well.
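The size savings come straight from bit-packing: at 2bpp, four palette indices fit in one byte. A minimal sketch of the pack/unpack pair (hypothetical helper names, not the actual HDPIC code):

```c
#include <assert.h>
#include <stdint.h>

/* Pack four 2bpp pixels (palette indices 0..3) into one byte, MSB-first. */
static uint8_t pack2bpp(const uint8_t px[4]) {
    return (uint8_t)((px[0] << 6) | (px[1] << 4) | (px[2] << 2) | px[3]);
}

/* Unpack one byte back into four palette indices for drawing. */
static void unpack2bpp(uint8_t b, uint8_t out[4]) {
    out[0] = (b >> 6) & 3;
    out[1] = (b >> 4) & 3;
    out[2] = (b >> 2) & 3;
    out[3] = b & 3;
}
```

1bpp and 4bpp work the same way with eight and two pixels per byte; the viewer just has to unpack into an 8bpp buffer before handing the data to the gfx library.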

This isn't all that's planned for this break. Tiny_Hacker and Roccolox just finished implementing 16 bit versions of ScaleSprite and CopyRectangle which means I can add back features like panning and zooming! That seems like it'll be a bit easier than adding different bit level support which is why I put it off for later. Once I get those features working I'll be ready to release HDPIC v3.0!
Adding the new ScaleSprite and CopyRectangle functions took more time than expected, mostly because I was still allocating picture memory with the old gfx library instead of the gfx16 library, which was breaking functions like zx0_decompress() 😄

Nevertheless, images can now have anywhere between 2 and 65,000 colors with better dithering! HDPIC will automatically switch between gfx libraries when needed. I got the program stable enough for an alpha release before I lose the time to work on this again.

Download: https://github.com/TheLastMillennial/HD-Picture-Viewer/releases/tag/v3.0.0-alpha.1


The first gif shows the different bpp levels. The second gif shows a 16bpp 2000x1600 (1.6mb) picture properly displaying with zoom and pan features!

There are a few bugs:
- Allocating picture memory with gfx16 doesn't work, but I managed to work around it well enough for now.
- Occasionally not all images get detected, especially when one of the images is super large.
- Visual bug: switching between gfx libraries often causes a white flicker. This is a part of the gfx library initializing and may not be fully resolvable.
- Visual bug: panning to the right causes portions of the screen to get incorrectly duplicated.
Happy new year everyone!

For the longest time I've wanted to allow multiple compression methods for the pictures: zx7, zx0, and no compression. I didn't know the performance impact of each method so I performed some tests. I decompressed the same appvar 250 times and these were my averages (all values were within 1% of the average):
  • no compression: 1.7 ms
  • zx0: 6.0 ms (3.5x slower)
  • zx7: 8.6 ms (5x slower)

It's pretty clear zx0 is the winner. Not only is it 2-7% better at compressing than zx7, but it's a bit faster too. I suppose zx7 is only still in the toolchain for backwards compatibility reasons?
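For reference, the benchmark shape is just "run it many times and divide". A host-side sketch (zx0_decompress isn't available off-calc, so a dummy workload stands in; on the calculator you'd read a hardware timer instead of clock()):

```c
#include <assert.h>
#include <time.h>

/* Average the runtime of fn() over `reps` runs, in milliseconds.
 * Timing the whole loop once avoids per-call clock() overhead. */
static double avg_ms(void (*fn)(void), int reps) {
    clock_t start = clock();
    for (int i = 0; i < reps; i++) fn();
    clock_t end = clock();
    return (double)(end - start) * 1000.0 / CLOCKS_PER_SEC / (double)reps;
}

/* Dummy workload standing in for a decompression call. */
static volatile int sink;
static void dummy(void) { for (int i = 0; i < 1000; i++) sink += i; }
```

Averaging over 250 runs is what keeps each result within 1% of the mean; a single run would be dominated by noise.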

I added all the compression methods already but I haven't decided whether I'll keep them. It's working pretty well so far, but no-compression has a visual bug I haven't figured out yet.


I tested the speed of the compression methods because I think my next big update will be adding GIF support. I'll be using Zerico2005's GraphY library limited to 160x120 8bpp to avoid screen tearing. I'm aiming for at least 10fps but I think I can get 15-20fps even with zx0 compression.

Enforcing zx0 compression for GIFs is probably the way to go since there's not much reason to have ~25% higher fps at the cost of much shorter GIFs. If I use transparent sprites to only overlay new data onto old frames, compression will allow me to significantly increase the length of GIFs.

One last thing, I fixed the bug where not all images were detected. I wasn't resetting the appvar search position. 🙄
Yo how do you get the cool banners at the bottom? also HOW ON EARTH IS IT SUCH HIGH QUALITY?!?!?!?!!?!??!?! Respect!
In order to reduce complexity and avoid a bit of scope creep, I've opted to remove multiple compression methods. I'm glad I tried it, but now I know for sure zx0 is the way to go. Anyways, I got a bit antsy and started GIF support already.

There are three main concerns to address: storage, frame rate, and memory constraints.

Storage:

Storage constraints are a huge concern for animations since worst-case, the calculator would only be able to store about 40 frames of full resolution. Of course, zx0 automatically saves ~40% on each frame, but that still only increases the maximum number of frames to 66. For now, I've halved the resolution to 160x120 which instantly quadruples how many frames we can store to 264. This is only temporary so I need to save storage space some other way.

For most GIFs, there's a lot of data that doesn't change each frame, such as static backgrounds. Instead of storing all the data to re-create each frame on its own, I only store the parts of the image that are different from the previous frame. This allows me to just overlay the pixels that actually changed onto the previous frame, which makes compression much more efficient. The magenta color represents the pixels that didn't change from the previous frame.


(First frame | Second frame)
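The delta-frame trick above boils down to a few lines. This sketch uses an arbitrary key value standing in for the magenta transparent index (the real converter has to reserve that color so real pixels never collide with it):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define KEY 0xE3 /* stand-in for the "magenta" transparent index */
#define N 8

/* Converter side: unchanged pixels become the key color, which
 * compresses extremely well as long identical runs. Note: a pixel that
 * legitimately equals KEY must not appear in the source palette. */
static void make_delta(const uint8_t prev[N], const uint8_t cur[N],
                       uint8_t delta[N]) {
    for (int i = 0; i < N; i++)
        delta[i] = (cur[i] == prev[i]) ? KEY : cur[i];
}

/* Viewer side: rebuild the frame by overlaying the delta as a
 * transparent sprite onto the previous frame. */
static void apply_delta(uint8_t frame[N], const uint8_t delta[N]) {
    for (int i = 0; i < N; i++)
        if (delta[i] != KEY) frame[i] = delta[i];
}
```

Overlaying deltas means frame N can only be rebuilt after frames 0..N-1, which is fine for playback but rules out random seeking.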

While this significantly helps for clean GIFs with static backgrounds, tons of GIFs are low-quality noisy conversions of videos. Noise really hurts compression. We can mitigate this by binning colors so similar colors become the same color. I'm currently reducing the bits assigned to RGB to 3-3-2. You can see there's a massive difference when comparing these two frames. More magenta generally means better compression.


(Both are the same frame from the same GIF. The first has full color quality. The second has the colors binned.)
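The 3-3-2 binning itself is just bit masking. A sketch (keeping the binned value in 8-bit-per-channel form rather than the packed format the converter actually stores):

```c
#include <assert.h>
#include <stdint.h>

/* Bin an RGB888 color to 3-3-2: keep the top 3 bits of red and green
 * and the top 2 bits of blue. Near-identical noisy pixels collapse to
 * the same byte, so more of each frame matches the previous one and
 * compresses as repeats. */
static uint8_t bin332(uint8_t r, uint8_t g, uint8_t b) {
    return (uint8_t)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}
```

Blue gets the fewest bits because the eye is least sensitive to it, which is the usual reasoning behind 3-3-2.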

After all this, how many frames can I store? I converted about 20 seconds of Never Gonna Give You Up, which is a decent stress-test since it has lots of motion and significant scene changes. It outputted 300 frames totaling 1.2MB. That means the calculator can store over 600 frames!

Frame rate:

My target is 10fps with a stretch goal of 20fps. HDPIC was originally designed for displaying static pictures so render time wasn't that important. This meant I was finding each appvar at render-time. This actually isn't too slow but I'm trying to cut processing time wherever possible. Now the program finds the pointer to each appvar during initial startup. That way when it's time to call a specific frame, the calculator knows exactly where to look for the data.

This works really well and I can get ~40fps when displaying the GIF at 160x120. If I try to make it full screen using ScaleSprite_NoClip(), the frame rate drops to just ~10fps. I don't think this is actually a huge deal; I plan to switch to native resolution GIFs anyway, which means no scaling will be necessary. I fully expect I'll still hit above my 10fps target.

Of course, I'm storing the proper frame times so the calculator only displays a new frame when necessary.
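Frame scheduling from stored per-frame delays can be sketched like this (hypothetical helper, simplified to a pure function so it's easy to test; the real loop would compare against a hardware timer each tick):

```c
#include <assert.h>
#include <stdint.h>

/* Given per-frame delays (ms) and elapsed time since the GIF started,
 * return which frame should currently be on screen, looping forever. */
static int frame_at(const uint16_t delays_ms[], int n, uint32_t elapsed_ms) {
    uint32_t total = 0;
    for (int i = 0; i < n; i++) total += delays_ms[i];
    elapsed_ms %= total;                 /* loop the animation */
    for (int i = 0; i < n; i++) {
        if (elapsed_ms < delays_ms[i]) return i;
        elapsed_ms -= delays_ms[i];
    }
    return 0; /* not reached: elapsed_ms < total after the modulo */
}
```

Driving playback off wall-clock time like this means a slow render skips ahead to the right frame instead of letting the whole GIF lag behind.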


(You can see the slowdown when entering full screen)

Memory Management:

HDPIC was originally written to only use dynamic memory allocation since a) that's all I knew at the time and b) since I was using scaling, I didn't know how much memory to allocate at program startup. This started causing memory fragmentation and instability as I started reserving more and more memory.

After a gentle berating from Mateo, I've started learning how to use static memory allocation. There are some parts that are easy to allocate statically. For example, all sub-images are always 80x80 and all GIF frames are currently 160x120. The memory quickly starts piling up though.

8bpp images:
- 6,400 bytes for the initial storage where the appvar data is stored.
- 6,400 bytes if I need to bit-unpack 1, 2, & 4bpp images.
- Up to 57,600 bytes for a subimage scaled up to 240x240.
16bpp images (everything is twice as big as 8bpp):
- 12,800 bytes for the initial storage where the appvar data is stored.
- Up to 20,000 bytes for a subimage scaled up to 100x100 (this is artificially limited).
GIFs:
- 19,200 bytes for a 160x120 frame.

I only foresee memory usage increasing as I switch to 320x240 GIF frames and add other features in the future. I can't statically allocate all this memory at the same time. However, I realized I don't need it all allocated at the same time: a 16bpp image will never be displayed at the same time as an 8bpp image or GIF, and likewise an 8bpp image will never be displayed at the same time as a GIF or 16bpp image.

This means I've been trying out a C++ union, which basically allows me to use the same memory space for different variables. This has worked pretty well so far! The memory is always overwritten with picture data before it's read, so there's never an issue of reading leftover memory from a different picture mode.
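A sketch of that union, with illustrative sizes loosely matching the figures above (member names are hypothetical, not the actual HDPIC declarations):

```c
#include <assert.h>
#include <stdint.h>

/* One statically allocated pool, reused by whichever mode is active.
 * The union is as large as its biggest member, so the three modes
 * share storage instead of each reserving their own buffers. */
typedef union {
    struct {
        uint8_t appvar[6400];     /* decompressed appvar data       */
        uint8_t unpacked[6400];   /* bit-unpacked 1/2/4bpp pixels   */
        uint8_t scaled[57600];    /* subimage scaled up to 240x240  */
    } img8;
    struct {
        uint16_t appvar[6400];    /* 12,800 bytes                   */
        uint16_t scaled[10000];   /* 100x100 subimage, 20,000 bytes */
    } img16;
    struct {
        uint8_t frame[160 * 120]; /* one 19,200-byte GIF frame      */
    } gif;
} PicturePool;

static PicturePool pool; /* lives in .bss: no malloc/free, no fragmentation */
```

Because only one mode touches the pool at a time, the "overwritten before read" invariant is what makes the aliasing safe.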

Conclusion:

What does all this look like in practice? I'm pretty happy with the results so far! CEmu won't transfer all the appvars for some reason so there's way more artifacting than you see on a real calc.


Real-world example (a single frame got corrupted): https://i.imgur.com/mhWmMzs.mp4

I guess I'm obligated to run Bad Apple on this at some point...


Voblit wrote:
Yo how do you get the cool banners at the bottom? also HOW ON EARTH IS IT SUCH HIGH QUALITY?!?!?!?!!?!??!?! Respect!
You can add banners in your Cemetech profile settings. The quality is much higher since I'm storing unique color data at full resolution rather than TI's approach which upscales from a very low resolution.
Is there a way to convert images without a windows computer? Perhaps a web converter?
Yes, there is a web converter, but it doesn't (yet?) support whatever's changed in the image format for version 3.
When the viewer gets updated to support GIFs, will the web converter be updated too?
I don't plan on making my own web converter but I do plan on porting the code so it's more cross-platform. The framework the converter was originally written in has fallen out of support so an overhaul is already necessary.

I wouldn't recommend Tari update their converter yet. I'm still in the alpha state, so the image format is still subject to change. I can post about the format once it's more finalized.

I recommend using Wine in the meantime.
Is there a release of the GIF-supported version?
No, everything is still in alpha state. And no, I don't have a timeline for when things will be released. You can always look up how to build the code from source. The GitHub repos are linked in the first post.
  