Programming Memoirs

wachowicz

Simple Python HTTP server using sockets

I brewed a simple Python-based HTTP server using sockets today and I wanted to share the code, along with some comments. Sure, there is the http.server module, but why not have some fun? :-) Building a fully fledged HTTP server is a big undertaking, so I focused on building a small server supporting only the basic functionality of the Hypertext Transfer Protocol (namely the GET and HEAD request methods).
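
To give a flavour of the approach before you read the full write-up, here is a minimal sketch of a socket-based server that answers GET and HEAD requests. It is an illustration written for this summary, not the code from the article, and it serves a single hard-coded page instead of files from disk:

    # Minimal sketch: a socket-based HTTP server answering GET and HEAD only.
    import socket

    HOST, PORT = "127.0.0.1", 8080
    BODY = b"<html><body><h1>Hello from a socket-based server</h1></body></html>"

    def build_response(method):
        # Only GET and HEAD are supported; anything else gets 501.
        if method not in ("GET", "HEAD"):
            return b"HTTP/1.1 501 Not Implemented\r\nContent-Length: 0\r\nConnection: close\r\n\r\n"
        headers = ("HTTP/1.1 200 OK\r\n"
                   "Content-Type: text/html; charset=utf-8\r\n"
                   "Content-Length: %d\r\n"
                   "Connection: close\r\n"
                   "\r\n" % len(BODY)).encode("ascii")
        # HEAD returns the same headers as GET, just without the body.
        return headers if method == "HEAD" else headers + BODY

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((HOST, PORT))
        server.listen(5)
        print("Serving on http://%s:%d" % (HOST, PORT))
        while True:
            conn, _addr = server.accept()
            with conn:
                request = conn.recv(4096).decode("iso-8859-1")
                method = request.split(" ", 1)[0] if request else ""
                conn.sendall(build_response(method))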

Read the full article »

Python’s built-in container data types: categorisation and iteration

The aim of this short article is to take a look at the built-in container data types in Python 3.1. This introduction, however, is not a typical one. Typical Python data type tutorials describe the individual types one by one. Here, instead, I try to describe how Python’s container data types can be grouped according to their properties, and I explain when and why some containers are iterable and why others are not. The article therefore covers concepts such as:

  • mutable and immutable objects,
  • hashable objects,
  • ordered and unordered objects,
  • the relationship between the sequence/set/mapping types and the iterable type, including the implicit sequence iterator in custom sequence containers.

The article assumes the reader has a fundamental understanding of container types in Python.
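
As a small taste of the distinctions the article goes through, the snippet below pokes at a few built-ins (and one custom sequence) to show mutability, hashability, ordering and the implicit sequence iterator. It is an illustration written for this summary, not code from the article:

    # A quick tour of the properties discussed in the article.

    # Mutable vs. immutable: a list can be changed in place, a tuple cannot.
    numbers = [1, 2, 3]
    numbers[0] = 10               # fine: lists are mutable
    point = (1, 2, 3)
    # point[0] = 10               # TypeError: tuples are immutable

    # Hashable objects can serve as dict keys or set members.
    labels = {point: "a point"}   # a tuple of hashables is hashable
    # {numbers: "nope"}           # TypeError: lists are unhashable

    # Ordered vs. unordered: lists and tuples keep insertion order,
    # sets make no ordering guarantee.
    print(list({"b", "a", "c"}))  # order is arbitrary

    # The implicit sequence iterator: a class with an integer-indexed
    # __getitem__ that raises IndexError is iterable without defining __iter__.
    class Countdown:
        def __init__(self, start):
            self.start = start
        def __getitem__(self, index):
            if 0 <= index < self.start:
                return self.start - index
            raise IndexError(index)

    print(list(Countdown(3)))     # [3, 2, 1]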

Read the full article »

Faking anti-aliasing in CUDA graphic output

Faking anti-aliasing of CUDA content with OpenGL

Recently I’ve been working on a CUDA ray tracing application which uses OpenGL to display the CUDA graphics output. I was slightly annoyed by the rough, aliased edges of my output, but I wanted to avoid implementing a fully fledged anti-aliasing solution. This short post describes how to fake anti-aliasing in your CUDA output in a highly efficient (and simple) manner by using OpenGL’s texture filtering.
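
The teaser above does not spell the trick out; one common way to exploit texture filtering for this is to have CUDA render into a texture larger than the window and let GL_LINEAR minification blend neighbouring pixels when the texture is drawn at display size. The exact setup is in the full post. The NumPy sketch below only illustrates the averaging effect of such a downsample (in the real application the averaging happens inside OpenGL, not in Python):

    import numpy as np

    def box_downsample(image, factor):
        # Average factor x factor blocks of an (H, W, C) image -- roughly what
        # linear texture filtering does when a texture is shown at 1/factor size.
        h, w, c = image.shape
        assert h % factor == 0 and w % factor == 0
        blocks = image.reshape(h // factor, factor, w // factor, factor, c)
        return blocks.mean(axis=(1, 3))

    # Stand-in for a CUDA image rendered at 2x the display resolution.
    supersampled = np.random.rand(1024, 1536, 3).astype(np.float32)
    display = box_downsample(supersampled, 2)   # 512 x 768, with softened edges
    print(display.shape)                        # (512, 768, 3)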

Read the full article »

3D (depth) composition of CUDA ray traced images with OpenGL rasterized images using CUDA Driver API

Depth composition of a CUDA ray traced image with OpenGL rasterized object transformation gizmos

Ray tracing is a great method of generating synthetic images. It has many benefits over the rasterization traditionally used, e.g., in computer games. Ray tracing stays great up to the moment when you need to render, say, a line segment placed in your 3D space (one which is potentially occluded by other 3D objects).

Why would you want to ray trace a line or a line segment? Say you want to create some kind of transformation gizmo for your 3D objects which blends nicely into the scene, or a bounding box depicting the boundaries of an object, or to include the wireframes of meshes in your ray traced scene, or … There are many potential uses.

You cannot just mathematically test for camera ray vs. line segment collisions and expect the line segment to appear in the ray traced rendering. A line segment is infinitely thin, so the chances of a camera ray hitting it exactly are far too slim for it to ever be visible.

You can try simulating a line by drawing a thin cylinder, or an ‘x’ made of two quads, but this is far from an elegant solution. Additionally, your ‘line’ stops being a mathematical line, as it suddenly has a width: it becomes thicker closer to the camera and thinner further away from it (assuming you use perspective projection).

Fortunately, there is a solution for drawing lines in ray traced content. It involves using some kind of rasterization-based renderer, such as OpenGL, to draw the lines (or any other objects) separately from the ray tracing pass, and then performing a 3D (depth) composition of the two images. The problem becomes even more interesting if you obtain the ray traced image using CUDA and the CUDA Driver API.
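
The CUDA/OpenGL interop details are in the full article, but the composition step itself boils down to a per-pixel depth test between the two images. The NumPy sketch below illustrates just that step; the buffer names are made up for the example, and it assumes both depth buffers are expressed in the same space (pixels not covered by any rasterized object keep the far-plane depth after the OpenGL clear, so the ray traced colour wins there):

    import numpy as np

    # Stand-in random data; in the real application the first pair comes from the
    # CUDA ray tracer and the second pair from OpenGL's colour and depth buffers.
    H, W = 512, 768
    ray_color = np.random.rand(H, W, 3).astype(np.float32)  # ray traced colour
    ray_depth = np.random.rand(H, W).astype(np.float32)     # per-pixel hit depth
    gl_color  = np.random.rand(H, W, 3).astype(np.float32)  # rasterized gizmos / lines
    gl_depth  = np.random.rand(H, W).astype(np.float32)     # rasterized depth buffer

    # Keep whichever fragment is closer to the camera at each pixel.
    gl_wins = gl_depth < ray_depth
    composite = np.where(gl_wins[..., None], gl_color, ray_color)
    print(composite.shape)  # (512, 768, 3)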

Read the full article »