About The Blog
14 May 2025

Hey, this is my blog! Not much to say about myself: I just like programming and doing cool things. I wanted to make this blog to help other programmers, talk about coding opinions, and showcase some of the projects I make.
Here are some things I've worked on in the past. I've worked on a lot more (even things in Haskell), but some projects were abandoned or I didn't finish them all the way through, so I didn't feel like commenting on them here:
This Blog
- Started: April 2025
- Finished: May 2025
- Languages: Go
Not much to say here. I learned a little about Go through this project, and now I really like it and love its developer experience.
Comp-time DFA based lexer generator / Zig PR
- Started: January 2025
- Finished: Abandoned
- Languages: Zig
Normally I don't cover abandoned projects, but I got the code working for this one (just not all the bugs ironed out), and it involves a language I really like and just got into: Zig. I pushed Zig's comptime to the max to generate a DFA-based lexer for a regular language at comptime, and I realized that work like this should be left to intermediate compile-time stages rather than Zig's comptime. I also made a PR to the Zig standard library fixing a small bug.
ekjson
- Started: September 2024
- Finished: November 2024
- Languages: C, Python
While in my sophomore year of college I worked on this. What started out as a simple exercise in writing any library at all turned into a bigger project: creating a small and efficient JSON parser. Throughout this I learned a lot about creating development tooling and generating code, which actually led to my love of the Zig programming language. At least by my benchmarks, I created something as fast as simdjson without SIMD; it uses a lot of SWAR tricks and a fast Bellerophon implementation for parsing floating-point numbers. I realized too late, though, that for projects you should focus on the things that give the end user a better time, and not put all your focus into optimization at the start.
ScummVM
- Started: May 2023
- Finished: August 2023
- Languages: C++, Bash, Make
This was my Google Summer of Code 2023 project, where I optimized ScummVM's rendering code, achieving a 5x improvement in the AGS renderer and a 2x improvement in the global rendering code. It taught me the importance of strong mentor relationships and a structured approach to tackling large coding projects.
NextJS Portfolio Project
- Started: December 2022
- Finished: January 2023
- Languages: JavaScript, OpenGL Shading Language
I wanted to work on my webdev skills, since they weren't something I was really great at at the time, so I decided to mess around with NextJS and create a little portfolio.
Super Mario War Online
- Started: April 2022
- Finished: May 2022
- Languages: JavaScript, OpenGL Shading Language
This was a small exercise to get better at writing JavaScript and web stuff. At the time I was really big into game programming, so I picked a game programming project.
CNM Online / CNM Online Editor
- Started: May 2021
- Finished: WIP
- Languages: C, Rust, Make
This is a game I made with my cousins. It has quite a few cool features, and its codebase is over 40,000 lines of code. It has its own level format, renderer, and networking code. The networking code uses delta compression so that frame updates and other things sent over the wire aren't as big (see the sketch below). The level editor uses WebGPU and Rust to give a pretty good user experience for editing levels in the game.
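The actual netcode is written in C and isn't reproduced here; purely as an illustration of the delta-compression idea (the field names and values below are made up), here's a minimal Python sketch:

```python
from typing import Dict

def delta_encode(prev: Dict[str, int], curr: Dict[str, int]) -> Dict[str, int]:
    """Send only the fields that changed since the last acknowledged state."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

def delta_apply(prev: Dict[str, int], delta: Dict[str, int]) -> Dict[str, int]:
    """Reconstruct the full state on the receiving side."""
    state = dict(prev)
    state.update(delta)
    return state

# Hypothetical player state for one frame.
last_acked = {"x": 100, "y": 200, "hp": 5, "anim": 3}
current    = {"x": 104, "y": 200, "hp": 5, "anim": 4}

delta = delta_encode(last_acked, current)        # only {'x': 104, 'anim': 4} goes over the wire
assert delta_apply(last_acked, delta) == current
```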
Hackathon - Benjamin Blodgett & Wyatt Radkiewicz
23 April 2025
Background
It took us a bit of time to figure out what was going on statistically. Basically, we look at the difference between the respective "Elo" (performance) ratings of two players or teams. We can then plug that rating difference into a sigmoid equation to predict the probability of one team or player winning over the other. The sigmoid ranges between 0 and 1 because a probability can't exceed 100% or go below 0%. The k value in our code represents the volatility of the rating update: a low k means the result of an event has a low impact on the rating.
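Our notebook isn't reproduced here, but the update we're describing is essentially the textbook Elo formula. Here's a minimal Python sketch (the 400-point scale and k = 32 default are the standard textbook values, not necessarily the exact parameters we used):

```python
def expected_score(rating_a: float, rating_b: float, scale: float = 400.0) -> float:
    """Sigmoid of the rating difference: predicted probability that A beats B."""
    return 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / scale))

def update_ratings(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Move each rating toward the observed result; k controls the volatility."""
    e_a = expected_score(rating_a, rating_b)
    s_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (s_a - e_a)
    new_b = rating_b + k * ((1.0 - s_a) - (1.0 - e_a))
    return new_a, new_b

# A 200-point favorite is predicted to win about 76% of the time.
print(expected_score(1600, 1400))
```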
Improving the Model
First, let's try to use only the later half of the games in the season for our model. We did this by changing the range of the for loop over games: we divide the range by two and offset the start by half to get the most recent half of the games.
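As a rough sketch of what that change looks like (the `games` list here is just a stand-in for the notebook's actual data structure):

```python
games = list(range(20))  # placeholder for one season's games in chronological order
n = len(games)

# Before: the Elo loop ran over every game in the season.
#   for game in games[0:n]:
# After: offset the start of the range by n // 2 to keep only the most recent half.
for game in games[n // 2 : n]:
    pass  # run the Elo update for this game
```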
Initial Results
We look at the average success rate and standard deviation of our prediction.
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
Where % is the average success rate and σ is the standard deviation.
Rivalry Scoring
One idea we had was to determine whether two teams have a rivalry. If they are rivals, we increase the k value correspondingly, increasing the effect of the results on the rating. As the table below shows, this marginally increased our prediction success rate, but at the cost of increasing the standard deviation.
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
| Rivalry Scoring (decay) | 0.7120 | 0.0590 |
We achieved this result with exponential decay: repeated success by one side or the other decays the rivalry amount.
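Our exact decay constant and k adjustment aren't shown here; the sketch below just illustrates the mechanism, with made-up parameter values:

```python
BASE_K = 32.0
DECAY = 0.8         # illustrative decay factor, not our actual parameter
RIVALRY_BUMP = 0.5  # illustrative: up to 50% extra k for strong rivalries

rivalry = {}      # (team, team) -> rivalry score in [0, 1]
last_winner = {}  # (team, team) -> which team won the previous meeting

def pair(a, b):
    return tuple(sorted((a, b)))

def k_for(a, b):
    """Scale k up when this pair's rivalry score is high."""
    return BASE_K * (1.0 + RIVALRY_BUMP * rivalry.get(pair(a, b), 0.0))

def record_result(winner, loser):
    """Repeated wins by the same side decay the rivalry; trading wins builds it."""
    key = pair(winner, loser)
    score = rivalry.get(key, 0.0)
    if last_winner.get(key) == winner:
        score *= DECAY                 # same side keeps winning: rivalry fades
    else:
        score = min(1.0, score + 0.2)  # the winner flipped: rivalry grows
    rivalry[key] = score
    last_winner[key] = winner

record_result("Lions", "Bears")
record_result("Bears", "Lions")
print(k_for("Lions", "Bears"))  # rivalry built up, so k is above BASE_K
```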
Parity Scoring
Our next idea was to simply look at the all-time wins and losses between two teams. If they have a roughly equal number of wins and losses against each other, then we consider them rivals. Alternatively, if their head-to-head record over time is more lopsided, we do not consider them rivals, and hence the k value will be lower.
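As a minimal sketch of the idea (the parity weight below is illustrative; our actual weighting, which we tune further below, differed):

```python
BASE_K = 32.0
PARITY_BUMP = 0.5  # illustrative weight on "closeness", not our tuned value

def parity(wins_a: int, wins_b: int) -> float:
    """1.0 when the head-to-head record is perfectly even, 0.0 when fully one-sided."""
    total = wins_a + wins_b
    if total == 0:
        return 0.0
    return 1.0 - abs(wins_a - wins_b) / total

def k_for(wins_a: int, wins_b: int) -> float:
    return BASE_K * (1.0 + PARITY_BUMP * parity(wins_a, wins_b))

print(k_for(10, 9))  # near-even head-to-head: larger k
print(k_for(15, 2))  # lopsided head-to-head: k stays closer to BASE_K
```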
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
| Rivalry Scoring (decay) | 0.7120 | 0.0590 |
| Parity Scoring | 0.7111 | 0.0624 |
Parity scoring performed slightly worse than rivalry scoring, perhaps because we weighted "closeness" too heavily. After reducing its effect we were able to achieve our highest success rate yet, with a standard deviation only marginally higher than baseline.
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
| Rivalry Scoring (decay) | 0.7120 | 0.0590 |
| Parity Scoring | 0.7111 | 0.0624 |
| Parity Scoring (light) | 0.7128 | 0.0579 |
Erratic Scoring
Previously we looked at the relationship between pairs of teams and their shared history. Next we want to determine what makes individual teams special. We do this by trying to find which teams are erratic in performance and which are consistent. For teams we deem erratic, we increase the k value.
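The specific "erratic" measure in our notebook isn't shown here; as a rough sketch of the idea, one illustrative choice is the spread of a team's recent results (the bump factor below is also made up):

```python
import statistics

BASE_K = 32.0
ERRATIC_BUMP = 0.5  # illustrative weight, not our tuned value

def erraticness(recent_results: list) -> float:
    """0.0 for a team that always wins (or always loses), higher when results swing.
    `recent_results` is a list of 1s (wins) and 0s (losses); using the standard
    deviation of these results is just one illustrative measure."""
    if len(recent_results) < 2:
        return 0.0
    return 2.0 * statistics.pstdev(recent_results)  # scaled into [0, 1]

def k_for(team_results: list) -> float:
    return BASE_K * (1.0 + ERRATIC_BUMP * erraticness(team_results))

print(k_for([1, 1, 1, 1, 1]))  # consistent winner: k stays at BASE_K
print(k_for([1, 0, 1, 0, 1]))  # erratic team: larger k
```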
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
| Rivalry Scoring (decay) | 0.7120 | 0.0590 |
| Parity Scoring | 0.7111 | 0.0624 |
| Parity Scoring (light) | 0.7128 | 0.0579 |
| Erratic Scoring | 0.7153 | 0.0630 |
We were unsure whether or not this made sense. If a team is incredibly consistent in its wins or losses, then a single data point to the contrary shouldn't necessarily be weighted more heavily; it could be an outlier, or indicate a temporary change in the team's roster. Maybe a specific key player is performing above usual expectations, but rosters change with time, and high-performing players may move on to greater opportunities on other teams or even outside of sports. What we really want to identify are the underlying factors that make a specific team successful or not over the long run. Thus, we will next attempt to give consistency a higher k and a greater impact on the scoring.
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
| Rivalry Scoring (decay) | 0.7120 | 0.0590 |
| Parity Scoring | 0.7111 | 0.0624 |
| Parity Scoring (light) | 0.7128 | 0.0579 |
| Erratic Scoring | 0.7153 | 0.0630 |
| Erratic Scoring (inverse)¹ | 0.7140 | 0.0640 |
The data appear to fit our original hypothesis better: we should weight k higher in cases where a team's performance is erratic, perhaps because erratic results are an indication that the trend is changing. If that is happening, then more emphasis should be placed on newer results in order to replace the old status quo somewhat.
More experimenting with the parameters of the standard erratic scoring gives us a better result.
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
| Rivalry Scoring (decay) | 0.7120 | 0.0590 |
| Parity Scoring | 0.7111 | 0.0624 |
| Parity Scoring (light) | 0.7128 | 0.0579 |
| Erratic Scoring | 0.7153 | 0.0630 |
| Erratic Scoring (inverse) | 0.7140 | 0.0640 |
| Erratic Scoring (improved) | 0.7160 | 0.0600 |
Exhaustion Scoring
Next we try to determine whether one team is more tired than the other, and subtract a little Elo from the tired team. Our method is to check whether the team's last game was played less than 3 days ago; if so, we subtract 30 points from that team's Elo rating.
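The 3-day window and 30-point penalty are the values described above; the function shape and names below are just an illustrative sketch, not our notebook code:

```python
from datetime import date, timedelta

FATIGUE_WINDOW = timedelta(days=3)  # last game played less than 3 days ago
FATIGUE_PENALTY = 30.0              # subtract 30 Elo points from the tired team

def effective_rating(rating: float, last_game: date, today: date) -> float:
    """Temporarily dock a team's rating if its last game was very recent."""
    if today - last_game < FATIGUE_WINDOW:
        return rating - FATIGUE_PENALTY
    return rating

print(effective_rating(1500.0, date(2025, 4, 21), date(2025, 4, 23)))  # 1470.0 (tired)
print(effective_rating(1500.0, date(2025, 4, 15), date(2025, 4, 23)))  # 1500.0 (rested)
```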
| Model | % | σ |
|---|---|---|
| Starting Values | 0.7078 | 0.0569 |
| Excluding Early Half | 0.6008 | 0.0809 |
| Rivalry Scoring (decay) | 0.7120 | 0.0590 |
| Parity Scoring | 0.7111 | 0.0624 |
| Parity Scoring (light) | 0.7128 | 0.0579 |
| Erratic Scoring | 0.7153 | 0.0630 |
| Erratic Scoring (inverse) | 0.7140 | 0.0640 |
| Erratic Scoring (improved) | 0.7160 | 0.0600 |
| Exhaustion Scoring | 0.7103 | 0.0544 |
This query was especially slow, and exhaustion scoring didn't improve our results either. This was our last attempt to improve our prediction; next time we will consider using a different language or set of tools. Here is the link to the Jupyter notebook. Our work was done in the very bottom snippet of code.

¹ By "(inverse)" here we mean that consistency implies a higher k.
GSOC ’23: Final Report
27 August 2023
- Student: eklipsed (Wyatt Radkiewicz)
- Organization: ScummVM
- Mentors: sev, criezy, lephilousophe, ccawley2011, and somean
- Project: Optimize ScummVM Rendering Code
- Pull Requests: 5243, 5114
Originally, the goal for the project was to optimize the pixel blending code in the renderer for the AGS game engine in ScummVM. The thing was, I completed that goal about halfway through the coding period. So my mentors and I talked, and what I did afterwards was optimize the rendering code that most other engines in ScummVM use. I used SIMD CPU extensions to net a pretty huge performance gain.
Basically, the AGS renderer got a 5x improvement all around and a 14x improvement in the best scenarios, and the global rendering code that all engines can use got a 2x improvement all around. Here are the speedup results.
The most challenging part is knowing where to start. First, you must get to know your mentors really well (calls, messaging, etc.); if you don't, you'll be left alone not knowing what to do. Second, if a coding project seems big, you should take three steps:
1. Figure out where you are and the actionable steps you can take to get where you want to be. What are the big milestones you have to hit along the way? Do you need to complete something else first to efficiently implement another feature? Should you write tests first?
2. Get the bare minimum code written. I know that sounds funny, but you should just get stuff working to start. This gives you plenty of time to look at your code and get to step three.
3. Make your code the best code anyone's seen. Now that you have 90% of the code written, you can optimize it, make it cleaner, and tie up any loose ends like updating the tests, making a PR, etc.
Once again, I'd like to thank Google Summer of Code 2023 and ScummVM for the opportunity to work on a project like this and learn so much. And I'd like to thank my mentors for helping me when I was stuck and teaching me how to work in a team.
Here are some pictures of funny glitches during the coding period:

I think this was the first picture I took. Here is what the game "Kings Quest 2: AGDI" looks like with only 32-bit pixel graphics blitting (I hadn't implemented 16-bit pixel formats yet at this point).

Same build as the one above. As you can see by the water on the shore, I got alpha blending working correctly, but there is some off-by-one error at the right of the screen where it overdraws a pixel or two.

Yeah, so when I finally did get 16-bit blitting/blending working, I noticed that scaled images were getting messed up a lot and just looked completely borked.

This is probably the worst-looking picture of them all. It's got the nasty off-by-one error, and the main character looks like something is not right…