18 Jun

Thrustlikes

Probably my favorite video games are thrustlike games – in the same sense that roguelikes are the games begat by Rogue (ADOM, Nethack, Angband, Moria et al.). What I mean by a thrustlike is a game derived from Asteroids, Lunar Lander, Gravitar and Spacewar that features a simple physics model, a rotating spaceship and a button for thrust. Hence “thrustlike” and not “gravitarlike”. Also, mostly because I really like Thrust.

Here are some of the games I consider the best examples of this genre.

Thrust (1986)

What I think Thrust does that makes it so much better than Asteroids and other household names is that it takes the simple rules of motion and inertia and adds another basic concept: counterforce. In Thrust, you not only maneuver the ship through narrow caves but also have to lift a heavy load (a “klystron pod”) that swings under the ship. In most cases the roles are then reversed, as the heavy pod transfers its excess kinetic energy to your stationary ship, sending you both spinning towards the nearest wall. However, the gameplay never feels as random as it does in many modern physics-based games that often require luck or endless trial and error. You are always in charge of the physics, if you are a good enough pilot.

Another thing I like about Thrust is that the player can easily skip a few levels by triggering the planet’s self-destruct sequence, which you do by shooting the power plant a few too many times. Why I like it is that in a way you feel like you cheated the game (although you also lose a lot of bonus points). This was no doubt added to make it easier to quickly reach and practice the later levels, but it’s a very nice touch that it’s built into the game world. It is also possible to get a hefty bonus if you manage to both fetch the pod and destroy the power plant.

Thrust has a reputation for being hard, but I can’t really agree with that. The game has a handful of levels and the regular ones are quite easy to finish once you get the hang of it. After that, however, gravity is reversed and the levels loop from the beginning, and after another successful completion the walls turn invisible. And then I guess the game goes reversed-and-invisible1.

Another proof of the game’s excellence is that it’s probably one of the most ported (officially or not) games when it comes to radically different platforms. There are equally playable versions for the BBC Micro (the original), the C64 (with a rocking Hubbard tune), the 8-bit and 16-bit Ataris, plus unofficial ports for the Vectrex and the Atari VCS2. There is also an open source C rewrite of the game engine, which I guess ensures the game can be ported to any semi-modern platform.

There apparently was a sequel that wasn’t so well received (proving that more does not equal better), and I have never seen it myself.

Gravitar (1982)

Gravitar, on the other hand, does much of the same. Thrust (1986) was obviously heavily influenced by Gravitar (1982)3, which in turn was probably influenced by Asteroids (1979) and Lunar Lander (1973, 1979). Both Thrust and Gravitar have planets to visit, fuel to be beamed up, enemy bunkers that shoot back and, of course, gravity. But to me, Gravitar’s gameplay isn’t as crystallized as Thrust’s. Thrust is as solid as, uh, Gravitar’s vectors are, umm, sharp.

https://www.youtube.com/watch?v=VZYW62S6TP0

What’s pretty neat in Gravitar is that you can choose the order in which you play the levels: they are presented as planets in a solar system, and you have to fly the ship between them while avoiding the star at the center. This adds another challenge (or an opportunity to optimize your play), since the star’s gravity can be exploited to save fuel. Each level is also quite different from the others.

Oids (1987)

Another game I really liked was Oids. It combines yet more favorites of mine with Thrust: Choplifter and Space Taxi, both outside the strictest requirements of the thrustlike genre. In Oids, the klystron pods are replaced with little guys (android slaves, or ‘oids) that you have to pick up, first of course landing the ship carefully, and then ferry up to the mothership. Oids is so loved that the Atari ST emulator Oidemu was made just to run the game, which wouldn’t run on other emulators, and Oidemu subsequently didn’t bother supporting any other games.

The idea of saving the little guys is lifted from Choplifter (get it?), in which you use a chopper to demolish prisons, land nearby and wait for the little men to climb aboard, making multiple runs since the chopper can only carry a limited number of men. The gameplay is much speedier in Oids: you constantly have to dodge incoming bullets and homing missiles, and the levels are generally much more open than in Thrust or Gravitar. All in all, Oids feels like a very natural progression from Thrust.

Maybe one factor that made the game so special was the fact that it shipped with a level editor.

Space Taxi (1984)

If we bend the rules of what makes a thrustlike, Space Taxi is more than a worthy addition to the genre. Especially so if you think Lunar Lander fits in the genre – both games have similar controls (thrust up, down and to the sides) and the common goal of landing quickly yet safely.

There was a nice clone of Space Taxi on the Amiga featuring a caveman flying a stone-age version of a helicopter (think Flintstones). The game was called Ugh! and it was later ported to the PC (it’s great) and C64 (it’s crap).

While Ugh! looks gorgeous and is fun to play, what Space Taxi does infinitely better is variety: every level is very different from the others, and it’s not just the layout that changes (Ugh!, on top of that, uses the same tiles over and over, however nicely pixeled they might be). The rules of the game often deviate slightly from level to level. For example, one level has a beanstalk that slowly grows leaves you can land on (and that passengers start appearing on), while others have disappearing walls or teleports. I guess that still makes the game quite unique, since even now games rarely have radically different levels.

There is also Space Taxi 2, a legit sequel, but the tiny voice in my head says it can’t possibly measure up. It’s worth a look, though, as it has the same level designs as the original.

Solar Jetman: Hunt for the Golden Warpship (1990)

Solar Jetman: Hunt for the Golden Warpship is pretty much exactly what Thrust was years earlier, but with updated graphics. The main difference is that in each level you have to find a different object (a part of the Golden Warpship), plus various other items that upgrade the ship you pilot around. There’s also a bit of Oids in the game, as it features more shooting and has enemies that follow you around the level.

The game provides welcome variation among NES games, and I can see why it wasn’t a success with younger gamers despite the great graphics (you’ve got to love the smooth rotation of the ship). The controls are smooth, but like all games of this genre, Solar Jetman can be a handful, especially if you are looking for more straightforward action.

The game is the successor to Lunar Jetman (which shares only half of the name and virtually no gameplay elements, except for the little spaceman you have to get back to the mothership when your probe is destroyed; see the video above). Solar Jetman was developed by Rare, who were previously known as Ultimate Play The Game and who developed Lunar Jetman for the Speccy (and loads of other very successful games). Solar Jetman was also going to be released for most of the home computers of the time, but the ports were canceled and only the C64 port is available to the public.

“Caveflyers”

In the mid-1990s there was a boom of so-called caveflyers (from the Finnish “luolalentely”) here in Finland. The caveflyer genre probably owes more to Spacewar, as it’s mainly about blowing up the other players. There were dozens of similar games that all featured hotseat multiplayer action (some even had computer-controlled opponents), various weapons and levels, and most importantly destructible environments.

There had been a lot of similar hobbyist games before the boom; the single game that started it all was probably Gravity Force on the Amiga. However, I think the flood of caveflyers (in Finland, at least) was triggered by Auts and Turboraketti on the PC. I am quite sure different areas had somewhat different ideas about which was the hottest game, but at least to me, Auts was it. After that, more and more clones started appearing, with constantly improving quality and a broadening weapon arsenal. My favorite was probably Kaboom. On the Amiga there was a slightly older similar game called Mayhem (1993), which we played a lot. There’s an abandoned PC conversion that you can find in the linked thread.

This subgenre is still alive and there are many modern caveflyers that have Internet multiplayer features and so on.

3D thrustlikes and other oddities

There have been a few 3D takes on the formula. Zarch (1987) on the Acorn Archimedes (later ported in 1988 as Virus to the Amiga, Atari ST, ZX Spectrum and PC) features similar controls for the player ship, except that there are now two rotation axes. The gameplay, however, involves minimal interaction with the landscape, as there are no caves. Later, in 1999, Psygnosis released Lander for the PC, which featured closed levels, cargo hauling and fuel that is teleported aboard the lander. So Lander pretty much is Thrust in 3D.

So, were there any games I left out that you think should be here? Do leave a comment. Also, I’d like to thank the people who uploaded gameplay videos of Thrust, Gravitar and Oids to YouTube. I only had an NES emulator with AVI output handy.

  1. Gravitar and loads of other games, including some Pac-Man sequels, do this as well.
  2. Including scrolling and everything… even drawing the rod that connects the klystron pod to the spacecraft was a potential project killer on the VCS.
  3. Maybe the creator of Thrust saw Gauntlet (1984) on the Atari as well.
25 Apr

Blog Experiment: More SQL and Text Matching

In Viewer2, there is a feature that allows fast text searches on huge databases with tens of thousands of filenames (and associated photo captions and other metadata). Here’s how I did it.

The simple approach

The easiest way to find all filenames matching $query (the string we are looking for) is something like this (the following talks about filenames, but you can use the same approach to search any text):

SELECT `filename` FROM `files` WHERE `filename` LIKE '%$query%'

Similarly, you can perform multi-word searches by adding one test for each word you have extracted from the query string. And you can very easily extend the syntax to include exclusions. Below is the equivalent SQL query for the query string [kitten -jpg] (filenames containing the text ‘kitten’ but not the string ‘jpg’):

SELECT `filename` 
  FROM `files` 
  WHERE `filename` LIKE '%kitten%' 
    AND `filename` NOT LIKE '%jpg%'

The above works well, but it can be very, very slow because it has to go through every entry in the `files` table to find matching rows. Speed can be improved with queries like LIKE 'word%', but this limits the searches and in the end doesn’t give much extra speed.

And if you are performing fuzzy searches, it only gets slower.

Using a token-based approach

A lot of speed can be gained by splitting the filenames and the searched text into tokens and then performing the search using the tokens. I ended up using the following structure (SQLite syntax):

CREATE TABLE `files` (
  `file_id` INTEGER PRIMARY KEY AUTOINCREMENT,
  `filename` TEXT
);

CREATE TABLE `tokens` (
  `token_id` INTEGER PRIMARY KEY AUTOINCREMENT,
  `word` TEXT UNIQUE
);

CREATE TABLE `link_file_token` (
  `token_id` INT, -- Foreign key to the table `tokens`
  `file_id` INT   -- Foreign key to the table `files`
);

The idea is that each word in a filename can be represented by a single index into the `tokens` table. Thus, you can search by comparing a single integer instead of comparing text. You have to compare text only when generating the tokens (which is extremely helpful if you are using a comparatively expensive comparison).

First, when you insert a file into the `files` table, you split the filename into words and link the file to the corresponding tokens (which may already exist in the `tokens` table). Later, when you need to perform a search, you do the same but use the generated tokens to find matching links.

The filename ‘Cute kitten picture.jpg‘ can be split into four tokens, which results in four links between the token table and the file table. I also split words where the character case changes (this splits filenames like ‘CuteKittenPicture.jpg‘), but that is up to you. In my implementation the tokens are also case-insensitive. A sketch of the insertion step is shown below.
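For illustration, inserting one file and linking its tokens could look roughly like this in plain SQL (a minimal sketch, assuming SQLite; in a real program the splitting and the per-word loop happen in the host language, and you would remember the new `file_id` instead of looking it up by filename):

INSERT INTO `files` (`filename`) VALUES ('Cute kitten picture.jpg');

-- For each word split from the filename ('cute', 'kitten', 'picture', 'jpg'):
INSERT OR IGNORE INTO `tokens` (`word`) VALUES ('kitten');

-- Link the token to the file (repeat for every word)
INSERT INTO `link_file_token` (`token_id`, `file_id`)
  SELECT t.`token_id`, f.`file_id`
    FROM `tokens` t, `files` f
    WHERE t.`word` = 'kitten'
      AND f.`filename` = 'Cute kitten picture.jpg';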

Searching using tokens

Here is a simple example query that finds matching files using the approach described above:

SELECT `filename` 
  FROM `files` 
  JOIN `link_file_token` USING (`file_id`)
  WHERE `token_id` IN (
    SELECT `token_id` 
    FROM `tokens`
    WHERE `word` LIKE '%$query%'
  )

The subquery first finds all `token_id`s that match the query, and then the main SELECT finds all filenames that have one of those `token_id`s linked to them. Obviously, this restricts the query to simple OR searches. This is because the text matching is done within the scope of one word rather than a longer string (using the above filename as an example, no single token can match both ‘cute’ and ‘kitten’, even though the filename as a whole contains both).

Here is how you would do a query like [cat OR dog]:

SELECT `filename` 
  FROM `files` 
  JOIN `link_file_token` USING (`file_id`)
  WHERE `token_id` IN (
    SELECT `token_id` 
    FROM `tokens`
    WHERE `word` LIKE '%cat%' OR `word` LIKE '%dog%'
  )

To perform an AND in the search query you need a query like this:

SELECT `filename` 
  FROM `files` 
  JOIN `link_file_token` AS `a` ON `files`.`file_id` = `a`.`file_id`
    AND `a`.`token_id` IN (
      SELECT `token_id` 
      FROM `tokens`
      WHERE `word` LIKE '%kitten%'
    )
  JOIN `link_file_token` AS `b` ON `files`.`file_id` = `b`.`file_id`
    AND `b`.`token_id` IN (
      SELECT `token_id` 
      FROM `tokens`
      WHERE `word` LIKE '%cute%'
    )

In the above example, joining the link table twice (note the aliases, which are required when the same table appears twice) removes all rows that match only one of the two searched words. Don’t forget that you can do the same in many ways. For example, you can use INTERSECT, as sketched below (do mind, though, that intersecting two result lists of strings can be much more resource-demanding than intersecting two result lists of integers).
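Here is a rough sketch of the same [kitten AND cute] search written with INTERSECT, intersecting the integer `file_id`s before fetching the filenames:

SELECT `filename` FROM `files` WHERE `file_id` IN (
  SELECT `file_id` FROM `link_file_token`
    WHERE `token_id` IN (SELECT `token_id` FROM `tokens` WHERE `word` LIKE '%kitten%')
  INTERSECT
  SELECT `file_id` FROM `link_file_token`
    WHERE `token_id` IN (SELECT `token_id` FROM `tokens` WHERE `word` LIKE '%cute%')
);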

Evaluation order

Keep in mind that if you combine the above AND and OR queries (by adding an OR `word` LIKE… for each OR’d word), it will not work as expected. By convention, AND is evaluated before OR (the relation is similar to multiplication and addition, which may help you remember it). Written as above, it works the opposite way: the OR is effectively evaluated before the AND.

If you restructure the query so that it, for example, returns indexes into the files table, you can implement the correct evaluation order by adding a UNION for each OR term. In effect, you first use the above query for each pair of AND’d terms and then combine all those queries with UNION.

For the query [cat cute OR dog drooly], the resulting SQL query will look something like this:

SELECT `filename` FROM `files` JOIN ... -- cat AND cute
UNION
SELECT `filename` FROM `files` JOIN ... -- dog AND drooly

Further ideas

To implement a NEAR operator, or when it is important that the searched words appear in a certain order (with short strings like filenames this is usually not necessary at all), you could add a field to the file-token links that tells the index of the word in the original string. Once you have found the results using the above method, you then score them based on the distance between the tokens.

E.g. if you are looking for [cute kitten] and a result has the text ‘Here is a cute dog. It is not a kitten’, the indexes for the words ‘cute’ and ‘kitten’ will be 4 and 10. Thus, the result should score worse than some other piece of text with the exact word order. A sketch of such a link table is shown below.
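A minimal sketch of the extended link table (the `position` column is my illustration here, not part of the original schema), plus the kind of lookup you would score afterwards; the concrete ids are placeholders:

CREATE TABLE `link_file_token` (
  `token_id` INT, -- Foreign key to the table `tokens`
  `file_id` INT,  -- Foreign key to the table `files`
  `position` INT  -- 1-based index of the word within the original string
);

-- Fetch the stored positions of the matched tokens for one candidate file
-- and score the result by the distance between them:
SELECT `token_id`, `position`
  FROM `link_file_token`
  WHERE `file_id` = 42 AND `token_id` IN (3, 17);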

Notes

In Viewer2, the exact implementation doesn’t use subqueries as described above. Instead, I first generate a longer query that has the needed `token_id`s and other details filled in (a comma-separated list of integers in place of the SELECT subquery). You might consider this to gain some more speed, but do experiment before you optimize.
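In other words, the final query ends up looking roughly like this (the numbers are just placeholders for whatever the token lookup returned):

SELECT `filename`
  FROM `files`
  JOIN `link_file_token` USING (`file_id`)
  WHERE `token_id` IN (12, 57, 303);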

Take note that even SQLite has a similar built-in feature called FTS2. In theory, it should provide most if not all of the above. My example is still useful if you need more control over how the text is tokenized and compared (and I think in the case of FTS2, enabling full-text search requires double the storage, unless you are willing to keep all the text data only in the full-text search table).

As always, research before you start doing something you think someone probably already did.

22 Apr

Blog Experiment: Desaturating and resizing images

More from the muse called Referer:.

Resizing images

Resizing an image is an interesting subject. There are many ways to do it, and the same result can be achieved with very different methods. Quickly summarized, I’d say resizing an image can be categorized like this:

  • Enlarging
    • Resampling
      • Nearest neighbor (blocky)
      • Linear interpolation (blurry)
      • Spline interpolation (e.g. cubic, Lanczos etc.; blurry, sometimes exaggerates contrast and makes edges stand out too much)
    • Super-resolution (“CSI” style zooming)
  • Shrinking
    • Resampling
      • Nearest neighbor (blocky)
      • Linear interpolation (average)
      • Spline interpolation (sometimes better than plain averaging, especially when the resolution isn’t, for example, exactly halved or divided by three)
    • Re-targeting (“content aware resizing”, keeps important details unchanged)

Now, for usual resizing there are many libraries that do it for you. There is no point recreating the same algorithms over and over, considering you will most likely use either bilinear interpolation, a spline algorithm or some special algorithm. With SDL, I have used SDL_gfx to resize images both larger and smaller. It does the most basic interpolation; for slight resizing it should be good enough.

[Image: Demonstration of 2xSaI interpolation, with nearest-neighbor scaling on the left for comparison.]

Another free piece of code for resizing is the Super Eagle algorithm, which was specifically created for enlarging emulator (Snes9x) graphics: it doesn’t blur edges and works quite well at removing “jaggy” lines. It should be available for [your preferred graphics library]; it has been adapted to so many projects that there is very probably an implementation floating around for you to steal.

As for commercial implementations of super-resolution, this looks pretty awesome. As does this online bitmap-to-vector image converter (vector images do not suffer from loss of sharpness when enlarged).

Do keep in mind that you can also use your computer’s video card to scale images: most (pretty much all) computers have some 3D features, which means the hardware can quickly draw an image on the screen at any size you like, even if you don’t need interactivity. I use this method in Viewer2 and found it easy to implement. You can also shrink images the same way: usually a graphics library will have a function that generates mipmaps, which give very nice results when the drawn image is smaller than the original.

And now the reason why I started writing these (lame) blog entries: image re-targeting, or seam carving. My implementation works somewhat (mainly with cartoon-style images that have as few color gradients as possible), but there are many other implementations that are good.

Desaturating (greyscaling) images

Desaturating (i.e. making color images black and white) is a subject that interests a portion of the people who find their way to my site. While this is a trivial problem to solve in any programming language, there actually are some quirks to it. Here goes.

This is the easiest (and probably fastest) method:

Grey = (R + G + B) / 3;

However, when speed is not important or if you simply are more pedantic than you are lazy, a more accurate mixing ratio is as follows:

Grey = 0.3 * R + 0.59 * G + 0.11 * B;

This is because the human eye isn’t equally sensitive to all three color components. While those three ratios are the most often quoted, each eye probably needs a slightly tweaked mix. But the main idea is that we are able to see green much better than red, and red a bit better than blue.

Lossy image compression uses this fact to dedicate more bits to the color components the eye is more sensitive to. Also, back when 24-bit color wasn’t quite yet available, the 15 and 16-bit modes often dedicated fewer bits to the blue component.

Worth noting is that if you want the more accurate formula to be fast, you will probably need to ditch the floating-point operations and do the conversion with integers, a bit like this:

Grey = (77 * R + 151 * G + 28 * B) / 256;

In any proper language and with a decent compiler, the above should compile to code that uses a bit shift instead of a division (2^8 = 256, and shifting right by 8 bits is the same as dividing by 256). If you do that, make sure your variables are wide enough that the intermediate sum won’t overflow.

Do keep in mind that any modern processor has vector processing capabilities, meaning you can desaturate multiple pixels in one operation. In that case it may be possible to do the conversion quite fast even with the first, floating-point formula. Then again, if you are able to use such features, you probably didn’t need to read this.

20 Apr

Blog Experiment: Comparing text and SQL

A lot of visitors come from search engines to my page about a little hack I made that acts as a very thin layer between text files and SQLite. Most of the search queries are something akin to “compare text files in SQL“, “SQL text compare” or other SQL and text comparison related searches. So, I’m going to write what I know about the subject.

Comparing text in SQL

Most implementations of SQL can do text comparison in a variety of ways. It is safe to assume any two text strings can be compared with the equal operator and the LIKE operator. The difference between the two basic operators is that the equal operator is usually case-sensitive by default (see below before assuming this is always true) while LIKE is case-insensitive. LIKE also has a nice feature, wildcards, which let you check whether a string contains a shorter substring.

1   SELECT `name` FROM `table` WHERE `name`='Joe Doe';
2   SELECT `name` FROM `table` WHERE `name` LIKE 'Joe Doe';
3   SELECT `name` FROM `table` WHERE `name` LIKE '% Doe';

All the above queries return the rows in which the field `name` (1) is exactly ‘Joe Doe’ and not e.g. ‘joe DoE’, (2) is the same as (1) but case-insensitive and finally (3) ends with the string ‘ Doe’ (case-insensitive).

As you can see, the percent sign acts as a wildcard that can stand for any sequence of characters – much like the asterisk (*) in e.g. file paths. The underscore (_) character matches any single character, comparable to what the question mark usually does with filenames and paths. The wildcard characters can differ between implementations.

The case-sensitivity can be changed. For example, the equal operator can be made case-insensitive if the collation is set to a case-insensitive one. Similarly, the LIKE operator can be set to be case-sensitive. See your SQL implementation’s manual for how to do this; a small example is sketched below.
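As a hedged illustration (SQLite syntax; other databases spell this differently, e.g. with per-column collations), it could look like this:

-- Case-insensitive equality via a collation
SELECT `name` FROM `table` WHERE `name` = 'joe doe' COLLATE NOCASE;

-- Make LIKE case-sensitive for the whole connection
PRAGMA case_sensitive_like = ON;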

When it comes to speed, you should prefer the equal operator over LIKE. In cases like the query below it is impossible to benefit from an index:

SELECT `name` FROM `table` WHERE `name` LIKE '%Doe%'

Why? Because the substring ‘Doe’ can be anywhere in the compared string. However, most SQL implementations can gain speed from indexes when you have a query something like this:

SELECT `name` FROM `table` WHERE `name` LIKE 'Joe%'

You can think of the speed-up as something similar to finding a word in a dictionary. Because the words are in a known order, you don’t need to look through every page: you quickly flip to roughly the right page (words starting with ‘Joe’) and then find the word on that page. Again, I’m sure different databases have different limitations; a sketch of adding such an index follows.
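A minimal sketch of adding an index (the index name is just an illustration; whether the prefix LIKE actually uses it depends on the database and its collation settings – SQLite’s LIKE, for example, is case-insensitive by default and needs extra care before it will use an index):

CREATE INDEX `idx_table_name` ON `table` (`name`);

-- Can benefit from the index (fixed prefix):
SELECT `name` FROM `table` WHERE `name` LIKE 'Joe%';

-- Cannot benefit (the substring may be anywhere):
SELECT `name` FROM `table` WHERE `name` LIKE '%Doe%';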

Advanced comparison

One very nice feature of MySQL is that it has a special kind of comparison built in: regular expressions. The REGEXP operator is used like the LIKE operator, but the string after the operator is interpreted as a regular expression.

SELECT `name` FROM `table` WHERE `name` REGEXP '^[JM]oe Doe$'

The above example matches names ‘Joe Doe’ and ‘Moe Doe’. Regular expressions of course can be much more complex than that but they are useful when you need just a little bit more functionality than what LIKE can offer.

Fuzzy comparison

Another feature that comes to mind is PostgreSQL’s pg_trgm module, which allows fuzzy text comparison based on trigrams (i.e. a word is divided into overlapping sets of three letters; ‘car’ would result in trigrams such as ‘ ca’, ‘car’ and ‘ar ‘ – with the spaces included). As far as I remember, it comes with the standard distribution, so it should be simple to enable. Basically, you can then compare strings that are not exactly alike but similar to within a certain percentage.

SELECT `name` FROM `table` WHERE similarity('Joe Doe',`name`)>0.5

The above would match all names that are relatively similar (more than 50 percent) to ‘Joe Doe’. For example, ‘John Doe’ would most likely match.

And yet again, MySQL and other databases may have the same feature. I intentionally left out the phonetic (“sounds like”) comparison features that are very often built in (AFAIK MySQL and PHP include those by default). According to this article, MySQL’s “sounds like” operator is, quite logically, SOUNDS LIKE.
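A quick sketch of what that looks like in MySQL (SOUNDS LIKE is shorthand for comparing SOUNDEX() values):

SELECT `name` FROM `table` WHERE `name` SOUNDS LIKE 'Joe Doe';

-- Equivalent to:
SELECT `name` FROM `table` WHERE SOUNDEX(`name`) = SOUNDEX('Joe Doe');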

Text files and SQL

A lot of search queries are looking for something that combines text files and SQL. As far as I know, there is no SQL engine that writes and reads raw text files as the tables (other than my hack mentioned in the first paragraph).

However, from what little I have toyed with the idea, it can be quite useful. Bundle that with the fuzzy and regexp comparisons of the bigger databases and it’s a killer idea right there.

Dumping tables into text files

Most databases come with a tool that can dump (or export) the database tables into SQL statements, which can then be run elsewhere to duplicate the structure and the data. It is quite safe to say most bigger databases have their own command-line tools that can do this, there are visual database tools such as phpMyAdmin for MySQL, and of course numerous desktop applications for the task.

I use SQLyog for MySQL tinkering. In SQLyog it is as simple as selecting the table (or database) and then clicking DB/Table > Export as SQL Statements, and soon you have a text file (or XML file or…). Doing it backwards is as simple as copy-pasting the text file contents and running the statements.

Checking if two tables are identical

Here’s a method for checking whether two tables contain identical data: inner join both tables (which returns only matching rows) and check if the number of joined rows equals the total number of rows in one table. The idea is that every field to be checked is used as a join key to the other table.

SELECT COUNT(*)/(SELECT COUNT(*) FROM `myFriends`) as `equality` 
  FROM `myFriends` as a, `yourFriends` as b
  WHERE a.`name`=b.`name` AND a.`phoneNumber`=b.`phoneNumber`

Column `equality` in the query result is the ratio of matching rows to the total row count. The above doesn’t care about the order of the rows, which may or may not be what you want (unless you include something like an auto-incrementing key in the comparison).

There, another almost useful page added to the Internets. I hope the above will be helpful at least to some random visitors.

18 Apr

A Blogging Experiment

I’m going to try something that is not entirely original but should still be useful. I have found that people often come across my page when looking for something with a search engine, yet the page they land on isn’t exactly what they were looking for. Or, to put it another way, I’ll try to help random visitors find what they are looking for (or, you could argue, this is some kind of search engine optimization).

At least Syd Lexia frequently does this on his page, albeit humorously (even though he does often answer a simple question as well as it can be answered). My idea is to either write as much as I know about the subjects or simply point out how to find the needed information.

For example, people are often looking for image re-targeting when they click their way to my post about a crap example of the seam carving algorithm. There are much nicer examples out there. I’d like the random visitor to know that, so their time is not wasted skimming through my writings.

So, expect more useless information about things that interest the random visitor (not you).