Deep Space Images

A point cloud visualisation using the image itself as its database.

Introduction

The idea behind Deep Space Images came from working with raw camera data streams. The fact that an image is basically a container storing data made me curious about using this information as input for another purpose.

I wanted to generate something new that represents the complexity of the underlying data while keeping the look and feel of the original reference. So I decided to generate a point cloud system constrained by the image itself as its database.

Result

How it works

The basic concept is to translate two-dimensional image information into three-dimensional space. An image consists of a huge array of values: every pixel contains a unique position and color information. Since images are two-dimensional objects, a third dimension had to be created. This was achieved by mapping the greyscale value of every pixel to its position on the third axis (black == minimal value, white == maximal value); in the Unity scene below, this third axis is the height (y) axis.
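
A minimal sketch of this greyscale-to-height mapping, written as a hypothetical helper (the names texture and heightFactor are assumptions for illustration):

// Minimal sketch of the mapping described above (hypothetical helper).
float PixelHeight(Texture2D texture, int x, int y, float heightFactor)
{
    float grey = texture.GetPixel(x, y).grayscale; // 0.0 = black, 1.0 = white
    return grey * heightFactor;                    // black -> minimal, white -> maximal height
}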

The cube, a three-dimensional primitive consisting of several flat planes, was the obvious choice to represent each pixel. Unfortunately, rendering that much geometry resulted in too many draw calls at runtime. I had to switch to flat planes facing the viewer (object normals oriented towards the camera) to give the illusion of a cube while keeping performance stable.
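
One common way to achieve this in Unity is a small billboard component that re-orients each quad every frame; this is a sketch of the general technique, not necessarily the project's exact implementation:

using UnityEngine;

// Billboard sketch: keeps a flat quad facing the camera so it
// reads as a solid shape from any viewing angle.
public class Billboard : MonoBehaviour
{
    void LateUpdate()
    {
        // Align the quad's normal with the camera's viewing direction.
        transform.forward = Camera.main.transform.forward;
    }
}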

Pixel calculation in detail

The fundamental structure of an image is different from what we see on screen. An image is one long row of numbers (an array of integers). When you open a picture from your last holiday trip, the computer splits that row into pieces of equal length and stacks them underneath each other to generate a grid.
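
A minimal sketch of this index-to-grid mapping, with illustrative values:

// For an image of width w, the value at flat index i maps to
// grid coordinates x = i % w (column) and y = i / w (row),
// and back again via i = y * w + x.
int width = 4;
int index = 9;          // the 10th value in the flat array
int x = index % width;  // -> column 1
int y = index / width;  // -> row 2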

Implementation


using UnityEngine;

public class DrawPixels : MonoBehaviour
{
    [SerializeField]
    private Texture2D InputImage; // must have Read/Write enabled in its import settings

    [SerializeField]
    private int skip = 10; // sample every n-th pixel to keep the object count manageable

    [SerializeField]
    private float heightFactor = 10.0f, startWidth = 0.1f, endWidth = 0.1f, lineHeight = 0.1f;

    [SerializeField]
    private Material PixelMaterial;

    //-------------------------------------------------------------------
    void Start()
    {
        for (int x = 0; x < InputImage.width; x += skip)
        {
            for (int z = 0; z < InputImage.height; z += skip)
            {
                Color currentColor = InputImage.GetPixel(x, z);
                float currentHeight = currentColor.grayscale;

                // Bottom and top vertex of the short line representing this pixel.
                Vector3 vertex = new Vector3(x * 0.1f, currentHeight * heightFactor, z * 0.1f);
                Vector3 height = new Vector3(x * 0.1f, (currentHeight * heightFactor) + lineHeight, z * 0.1f);
                Vector3[] lineVertices = new Vector3[] { vertex, height };

                GameObject Pixel = new GameObject("Pixel");
                Pixel.transform.position = vertex;

                LineRenderer PixelRenderer = Pixel.AddComponent<LineRenderer>();
                PixelRenderer.positionCount = 2;
                PixelRenderer.SetPositions(lineVertices);
                PixelRenderer.material = PixelMaterial;
                PixelRenderer.material.color = currentColor; // tints this instance, not the shared asset
                PixelRenderer.startWidth = startWidth;
                PixelRenderer.endWidth = endWidth;
                PixelRenderer.startColor = currentColor;
                PixelRenderer.endColor = currentColor;
            }
        }
    }
}
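
To try this out, attach DrawPixels to an empty GameObject and assign the input texture and material in the Inspector. Note that GetPixel only works on textures with Read/Write enabled in their import settings, and that spawning one GameObject per sampled pixel gets expensive quickly, which is what the skip parameter is there to mitigate.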

Conclusion

This project is just a first grasp of what is possible when working with unconventional datasets. My mindset has changed slightly since I started being creative with image data. For me, generating visual elements with code is a fascinating corner of programming, and I will definitely create and learn more in this area in the future.