The Physiology of Thinking Fast and Thinking Slow

Daniel Kahneman describes two distinct processes taking place in the brain: thinking fast and thinking slow. One process is automatic and intrinsic, covering things like reflexes and visual processing. The other is slow and deliberate, covering things like memory recall and mental arithmetic.

While reading his book, Thinking, Fast and Slow, it dawned on me that in the brain we have two distinct networks that communicate with one another: a neural network, which science has established to be fast (milliseconds) and not apparently under voluntary control, and an astrocyte network, which we now know to be slow (seconds) and kludgy. We have yet to discover any neural mechanics directly associated with phenomena like slow, conscious thought. Perhaps the slow astrocyte network holds some answers. It receives input and stimuli from the fast neural network, and it is able to send signals back.

This requires more thought. In fact, the whole notion could likely be debunked with some carefully crafted thought experiments about cognitive properties we know to exist. Regardless, it is an appealing notion. I leave it here for posterity.

Posted in Theories

Dropbox Woes – Recovering Data in Bulk

To recover many recently deleted files or folders scattered across many directories, just delete the folder highest in the file system tree that contains everything you want restored. Then recover that folder through the Dropbox web app. All of the recently deleted data within that folder will also be recovered.

The Situation
I have many workstations and need my files synced across them all. I’ve used many tricks in the past, from rsync scripts to mounted file systems. Dropbox’s syncing service let me take my hands off the syncing and put them back on my research. However, there were some problems with how I used Dropbox. I partition my hard drive so I can run multiple operating systems on one machine for my different needs. Naturally, I want my files synced across each partition. Unfortunately, I had partitioned my hard drive long before I installed Dropbox and overlooked a little detail…

The Problem
I always partition my hard drive into n+1 partitions, where n is the number of operating systems I plan to use. The extra partition is where I store the common files I want access to across all operating systems. Given that setup, I overlooked the consequences of using different Dropbox apps to sync the same files and folders on the same partition. I used one OS far more than the others, and between the lab, work, and home, my syncing life was great. But the one day I decided to use another OS turned into a very sad day. Since both instances of Dropbox were looking at the same file system, the Dropbox app on the second OS saw files that didn’t match its own records and began deleting everything that hadn’t been there the last time I ran it.

The Fix
After fretting for a while I realized the fix was quite easy. Dropbox provides a great caching and file recovery system: just browse the folders via the Dropbox web app and you can click on recently deleted files to restore them. The problem for me was that by the time I realized my files were being deleted, I had already lost thousands of them, scattered throughout my folders (Dropbox apparently deletes in no particular order). The trick that saved me hours of time was noticing how Dropbox restores folders, as opposed to files. When you restore a folder, Dropbox doesn’t know which files were in it when it was deleted, so it restores every file that has ever been in the folder (within the past 30 days, I believe). So to recover my thousands of scattered files, I just deleted the entire folder from my local file browser and then restored it via the Dropbox web app. As expected, all of my recently deleted files were restored.

Posted in Coding

Israel and Palestine – 21st Century Warfare

Sufficiently intelligent individuals know better than to get involved in politics. As a result we are left with the politicians we have today. Unfortunately, I must not be sufficiently intelligent. I have sought truth regarding the current plight between Palestine and Israel. Like my old friend Socrates, I too believe that constant questioning leads to truth.

[Image: map of modern-day Israel]

The history behind their conflict can be presented succinctly as follows. In the 1500s, the region of land in dispute was known as Palestine and was ruled by the Ottoman Empire. Up until the beginning of WWI in 1914, there was no significant conflict between the Palestinians and the Jewish people [1]. To escape persecution in Nazi Germany, many Jewish people began to immigrate to Palestine. After the war, the British gained control over the land, and the Jewish settlements had been publicly supported with the Balfour Declaration in 1917 [2]. Up until the end of WWII there was a lot of pushing and shoving for power between the land’s two peoples. After the war, in 1947, the United Nations broke the land into two states, Palestine and Israel [3]. Palestine opposed the resolution on the grounds that the land was originally theirs. In an effort to retake their land, Palestine rallied the support of its neighboring countries, and the united forces of Egypt, Jordan, Syria, and Palestine went to war with Israel in 1967. Israel defeated the united forces and during the war acquired the Palestinian land previously partitioned by the United Nations [4]. The conflict did not end at the cessation of the war. In 2005, Israel gave the land known as the Gaza Strip to the Palestinians as part of the unilateral disengagement plan [5]. So in summary, the conflict between Israelis and Palestinians began as a fight for land.

So for all practical purposes the Palestinian people were routed from their home. They made an effort to retake their land and were unsuccessful. So what is the current conflict based upon? Are they still fighting for their land? A quick search through Google regarding the current political affairs paints Israel as the aggressor. There are many videos and links about the brutal killing and torture of Palestinian civilians by the Israelis. There is a full documentary about how the Israeli military is raiding homes and kidnapping Palestinian children for interrogation. The evidence seemed to be pretty clear. However, I found all of this to be a bit curious. For example, the Israeli military has roughly 4,000 tanks and 450 aircraft; the Palestinian forces have no tanks and no aircraft [6]. Yet Israel ceded the Gaza Strip to the Palestinians in 2005. If the Palestinians pose no threat to Israel, why would it give them land? What would drive it to commit the heinous acts described in the many videos and articles on the internet? The facts just didn’t support the claims. So I began to dig deeper, and what I found was a surprising insight into how technology has created a paradigm shift in modern warfare tactics.

I began by chasing down the sources of the aforementioned documentary regarding Israel’s detention centers and their treatment of non-Jewish children. Most of its facts and claims came from a UNICEF document published in 2013 [7]. The claims made in the documentary matched what was written in the bulletin. The next step was to check the sources of the UNICEF bulletin. It turns out most of its information was obtained by word of mouth and came from a UNICEF office located in the Gaza Strip, i.e., modern-day Palestine. In light of this finding, I chose to search for more credible, unbiased sources regarding the quality of life of children in Israel and Palestine. I came across two documentaries which, once contrasted, suggested the existence of an unsettling truth. The first is a documentary about the conflict in Jenin from the perspective of the Palestinians, called Jenin Jenin, and the second is a documentary about the same conflict from the perspective of the Israelis, called The Road to Jenin.

“Jenin Jenin” is an hour-long documentary of Palestinians sharing personal testimonies about their hardships throughout the conflict. These were stories about Israeli soldiers shooting women and children, trampling elderly men with tanks, and molesting women. The first 15-20 minutes were moving, but after a while it seemed to be just a collection of emotional Palestinians telling stories about what happened. No testimonies from Israeli soldiers were presented. War is awful. It has happened throughout history, and no matter how it is carried out, people suffer. While it is clear that the Palestinians have suffered, no evidence was presented that even suggested the veracity of their claims. I felt as if I was being manipulated, as if I were being persuaded to sympathize with the Palestinians. To illustrate my point, below is a clip from the documentary where a young girl talks about her experiences. In my opinion, children don’t talk like this. Perhaps I don’t know children very well, or maybe the translations are just not very good. Regardless, you should watch the entire documentary and draw your own conclusions.

“The Road to Jenin”, while still somewhat biased, at least provided interviews with people from both sides. In this documentary the Palestinians tell equally horrific stories of what went on. This time, however, facts are presented demonstrating their exaggeration, which supports my previous hunch. What really brought everything together for me was the interview with the Palestinian children. The things they said were appalling. There was so much hate in their voices, and you could see the determination in their eyes. The interview with the children is shown below.

For me, this was enough evidence to form some basic opinions on the matter. First, regarding the claims that Israel mistreats non-Jewish children: whether they are true or not, once you put a gun in a child’s hands they are no longer a child, but a soldier. If I were in Israel’s situation, where Hamas (a terrorist organization occupying the Palestinian territory [8]) was using children as suicide bombers and undercover soldiers to kill my people and destroy my country, I would also treat the children like soldiers. Second, much of the information on the internet about how Israel continues to kill young Palestinian children seems to leave out the part about how those children were first trying to kill Israelis. This is similar to the documentary “Jenin Jenin”, which seems aimed at building compassion in the hearts of its viewers. With the previously mentioned military strengths in consideration, the Hamas leadership knows it cannot win in combat. So it seems they are waging a different kind of war: an information war. If they can persuade enough of the world to sympathize with them, then Israel cannot retaliate for fear of political backlash. A recent article by Daniel Pipes summarizes this situation excellently.

“No longer: The battlefield outcome of Arab–Israeli wars in last 40 years has been predictable; everyone knows Israeli forces will prevail. It’s more like cops and robbers than warfare. Ironically, this lopsidedness turns attention from winning and losing to morality and politics. Israel’s enemies provoke it to kill civilians, whose deaths bring them multiple benefits.”[9]

Inspired by Pipes’s article, I continued my research, looking for more evidence to support the idea that Hamas was spreading propaganda and waging an information war. I found an interview with Hamas leaders in which they openly encourage Gaza citizens to “protect” Hamas officials’ buildings with their bodies, making the Israelis more reluctant to continue their attacks. It is tactics like this that lead to the deaths of Palestinian civilians, and it is their deaths we continue to hear about in the media.

So in conclusion, who is right and who is wrong? In the real world we must make decisions based on incomplete information. The point of this post is to share information that seems to be poorly represented in the media. Palestinians are certainly entitled to feel robbed by the Israelis for occupying what was once their land. However, the UN clearly resolved to divide the country into two states, and it was the Palestinians who attacked Israel. Does the loss of their land decades ago warrant strapping bombs to children? By Western standards, not likely. However, the conflict is not occurring in the West, and we must keep that in mind when forming our opinions. What is most important is that people have access to as many sides of a story as possible when drawing their own conclusions.


  1. Smith, Charles D. Palestine and the Arab-Israeli conflict:[a history with documents]. Bedford/St. Martin’s, 2010.
  2. Balfour, Arthur. The Balfour Declaration. 1917.
  3. “A/RES/181(II) of 29 November 1947”. United Nations. 1947. Retrieved 11 January 2012.
  4. Shlaim, Avi (2012). The 1967 Arab-Israeli War: Origins and Consequences. Cambridge University Press. p. 106. ISBN 9781107002364.
  5. Steven Poole (2006). Unspeak: How Words Become Weapons, How Weapons Become a Message, and How That Message Becomes Reality. Grove Press. p. 87. ISBN 0-8021-1825-9.
  6. “The Institute for National Security Studies”, chapter Israel, 2010. Retrieved September 20, 2010.
  7. UNICEF. Children in Israeli Military Detention Observations and Recommendations. Bulletin No. 1: October, 2013.
  8. “Country Reports on Terrorism 2005”. United States Department of State, Office of the Coordinator for Counterterrorism. U.S. Dept. of State Publication 11324. April 2006. p. 196.
  9. Pipes, Daniel. “Why Does Hamas Want War?” July 11, 2014.
Posted in Nonrandom Thoughts

Artificial General Intelligence: Concept, State of the Art, and Future Prospects

Ben Goertzel, Founder of OpenCog, presents a high level summary of the Artificial General Intelligence (AGI) field in his 2014 review article [1]. Here I summarize the paper and then share my conclusions.


The paper can be broken into 5 primary sections.

  1. First he presents the core concepts behind AGI.
  2. He then attempts to unravel the complexities of understanding and defining general intelligence.
  3. Next, a careful consideration of projects in the field yields a succinct categorization of modern AGI methodologies. This is the meat of the paper, and the pros/cons analysis for each category is particularly elucidating.
  4. Then many of the graph and systems-modelling structures which underlie human-like general intelligence are presented.
  5. Lastly a consideration of metrics and analysis methods is performed.

In brief, the AGI field encompasses all methodologies, formalisms, and attempts at creating or understanding thinking machines with a general intelligence comparable to or greater than that of human beings. As can be seen from the previous sentence, this is a difficult concept to delineate. In light of this, Goertzel presents many qualitative AGI features that roughly describe the purpose and direction of the field. These features are believed to be accepted by most AGI researchers. After some hand waving he presents what he calls the Core AGI Hypothesis.

Core AGI Hypothesis: The creation and study of synthetic intelligences with sufficiently broad (e.g. human-level) scope and strong generalization capability, is at bottom qualitatively different from the creation and study of synthetic intelligences with significantly narrower scope and weaker generalization capability.

Goertzel assures his readers that this hypothesis is accepted by “nearly all researchers in the AGI community”. It contrasts with what has come to be known as narrow AI, a term coined by Ray Kurzweil. Narrow AI is synthetic intelligence software designed to solve specific, narrowly constrained problems [2]. A key feature of AGI that needs to be elaborated is the notion of general intelligence. Many approaches to defining and explaining what it means to be “generally intelligent” are proposed. After considering psychological and mathematical characterizations, adaptation and embodiment approaches, and cognitive architectures, Goertzel admits that no widely accepted definition exists. This, we will see, is a recurring theme in the AGI community.

After the high-level introduction to the scope of AGI, a succinct categorization of the mainstream AGI approaches is presented. Goertzel partitions the field into four categories.

  • Symbolic

    The roots of the symbolic approach to AGI reach back to the traditional AI field. The guiding principle for all symbolic systems is the belief that the mind exists mainly to manipulate symbols that represent aspects of the world or themselves. This belief is called the physical symbol system hypothesis.

    Symbolic thought is what most strongly distinguishes humans from other animals; it’s the crux of human general intelligence. Symbolic thought is precisely what lets us generalize most broadly. It’s possible to realize the symbolic core of human general intelligence independently of the specific neural processes that realize this core in the brain, and independently of the sensory and motor systems that serve as (very sophisticated) input and output conduits for human symbol-processing.

    While these symbolic AI architectures contain many valuable ideas and have yielded some interesting results, they seem to be incapable of giving rise to the emergent structures and dynamics required to yield humanlike general intelligence using feasible computational resources. Symbol manipulation emerged evolutionarily from simpler processes of perception and motivated action; and symbol manipulation in the human brain emerges from these same sorts of processes. Divorcing symbol manipulation from the underlying substrate of perception and motivated action doesn’t make sense, and will never yield generally intelligent agents, at best only useful problem-solving tools.

  • Emergentist

    The Emergentist approach to AGI takes the view that higher-level, more abstract symbolic processing arises (or emerges) naturally from lower-level “subsymbolic” dynamics. As an example, consider the classic multilayer neural network, ubiquitous in practice today. The view here is that a more thorough understanding of the fundamental components of the brain and their interplay may lead to a higher-level understanding of general intelligence as a whole.

    The brain consists of a large set of simple elements, complexly self-organizing into dynamical structures in response to the body’s experience. So, the natural way to approach AGI is to follow a similar approach: a large set of simple elements capable of appropriately adaptive self-organization. When a cognitive faculty is achieved via emergence from subsymbolic dynamics, then it automatically has some flexibility and adaptiveness to it (quite different from the “brittleness” seen in many symbolic AI systems). The human brain is actually very similar to the brains of other mammals, which are mostly involved in processing high-dimensional sensory data and coordinating complex actions; this sort of processing, which constitutes the foundation of general intelligence, is most naturally achieved via subsymbolic means.

    The brain happens to achieve its general intelligence via self-organizing networks of neurons, but to focus on this underlying level is misdirected. What matters is the cognitive “software” of the mind, not the lower-level hardware or wetware that’s used to realize it. The brain has a complex architecture that evolution has honed specifically to support advanced symbolic reasoning and other aspects of human general intelligence; what matters for creating human-level (or greater) intelligence is having the right information processing architecture, not the underlying mechanics via which the architecture is implemented.

    • Computational Neuroscience

      As it sounds, computational neuroscience is an approach to exploring the principles of neuroscience using computational models and simulations. This approach to AGI falls under the emergentist category. If a robust model of the human brain can be developed, it stands to reason that we may be able to glean insight into which components of the model give rise to higher-level general intelligence.

      The brain is the only example we have of a system with a high level of general intelligence. So, emulating the brain is obviously the most straightforward path to achieving AGI. Neuroscience is advancing rapidly, and so is computer hardware; so, putting the two together, there’s a fairly direct path toward AGI by implementing cutting-edge neuroscience models on massively powerful hardware. Once we understand how brain-based AGIs work, we will likely then gain the knowledge to build even better systems.

      Neuroscience is advancing rapidly but is still at a primitive stage; our knowledge about the brain is extremely incomplete, and we lack understanding of basic issues like how the brain learns or represents abstract knowledge. The brain’s cognitive mechanisms are well tuned to run efficiently on neural wetware, but current computer hardware has very different properties; given a certain fixed amount of digital computing hardware, one can create vastly more intelligent systems via crafting AGI algorithms appropriate to the hardware than via trying to force algorithms optimized for neural wetware onto a very different substrate.

    • Developmental Robotics

      Infants are the ultimate scientists. They use all of their senses to interact with their environment and over time create a model of their perceived reality. It is argued that general intelligence arises from “the brain’s” constant interaction with its surroundings and environment. Developmental robotics attempts to recreate this process.

      Young human children learn, mostly, by unsupervised exploration of their environment – using body and mind together to adapt to the world, with progressively increasing sophistication. This is the only way that we know of, for a mind to move from ignorance and incapability to knowledge and capability.

      Robots, at this stage in the development of technology, are extremely crude compared to the human body, and thus don’t provide an adequate infrastructure for mind/body learning of the sort a young human child does. Due to the early stage of robotics technology, robotics projects inevitably become preoccupied with robotics particulars, and never seem to get to the stage of addressing complex cognitive issues. Furthermore, it’s unclear whether detailed sensorimotor grounding is actually necessary in order to create an AGI doing human level reasoning and learning.

  • Hybrid

    In recent years AGI researchers have begun integrating symbolic and emergentist approaches. The motivation is that, if designed correctly, each system’s strengths can ameliorate the other’s weaknesses. The concept of “cognitive synergy” captures this principle: it argues that higher-level AGI emerges as a result of harmonious interactions among multiple components.

    The brain is a complex system with multiple different parts, architected according to different principles but all working closely together; so in that sense, the brain is a hybrid system. Different aspects of intelligence work best with different representational and learning mechanisms. If one designs the different parts of a hybrid system properly, one can get them to work together synergetically, each contributing its strengths to help overcome the others’ weaknesses. Biological systems tend to be messy, complex, and integrative; searching for a single “algorithm of general intelligence” is an inappropriate attempt to project the aesthetics of physics or theoretical computer science into a qualitatively different domain.

    Gluing together a bunch of inadequate systems isn’t going to make an adequate system. The brain uses a unified infrastructure (a neural network) for good reason; when you try to tie together qualitatively different components, you get a brittle system that can’t adapt that well, because the different components can’t work together with full flexibility. Hybrid systems are inelegant, and violate the “Occam’s Razor” heuristic.

  • Universalist

    The universalist approach leverages a principle employed by many creative designers and inventors. Instead of coming up with an idea that satisfies all of a problem’s inherent limitations, one “dreams big,” developing elaborate, even unrealistic ideas, and later simplifies them to fit within the confines of the problem. In regard to AGI, the so-called universalist approach aims at developing ideal, perfect, or unrealistic models of general intelligence. These models and algorithms may require incredible, even infinite, computational power. In summary, universalists might argue that one should not limit one’s creativity with imposed constraints.

    The case of AGI with massive computational resources is an idealized case of AGI, similar to assumptions like the frictionless plane in physics, or the large population size in evolutionary biology. Now that we’ve solved the AGI problem in this simplified special case, we can use the understanding we’ve gained to address more realistic cases. This way of proceeding is mathematically and intellectually rigorous, unlike the more ad hoc approaches typically taken in the field. And we’ve already shown we can scale down our theoretical approaches to handle various specialized problems.

    The theoretical achievement of advanced general intelligence using infinitely or unrealistically much computational resources, is a mathematical game which is only minimally relevant to achieving AGI using realistic amounts of resources. In the real world, the simple “trick” of exhaustively searching program space until you find the best program for your purposes, won’t get you very far. Trying to “scale down” from this simple method to something realistic isn’t going to work well, because real-world general intelligence is based on various complex, overlapping architectural mechanisms that just aren’t relevant to the massive-computational-resources situation.

The next section attempts to address the issue of metrics. For any scientific field to be viable, it must have a means of acquiring quantifiable measurements that can be compared and contrasted. Goertzel covers a wide range of proposed approaches, including quantitative and qualitative analyses, and means of measuring both long-term and incremental progress towards an ideal AGI. I will address this further in my conclusions.


This review paper was my first introduction to the AGI field. While at first I was a bit disappointed by the vagueness of the field’s direction and purpose, I came to see it as an opportunity to participate in an exciting new field: burgeoning, but adolescent. The whole of AGI is vast, and as of yet it has no unified direction or purpose. Acquiring a “forest-level” view is therefore difficult and would require studying many different “trees.” What I found most valuable in the paper was the succinct categorization of the field’s approaches, which gives the reader a decent view of the forest.

As a scientist from a mathematical background, I find definitions and metrics to be very important. Goertzel did an excellent job illustrating how difficult it is to consolidate the field of AGI into a succinct definition. Researchers have varying opinions of what the field’s purpose is and what it is they’re working towards. It is only natural, then, that the field lacks any formalized metrics for measuring progress. How can one measure progress without knowing what one is working towards? While it is valid to criticize this lack of formalism, some dismiss the field outright as a “wild goose chase”. Personally, I find this to be an overly harsh censure. Even AGI’s more developed sibling fields, such as neuroscience, lack the scientific capital to define what general intelligence is.

Recently, some governments have made neuroscience research a larger priority in their budgeting. As a consequence we can hope this increase in scientific vigor will bring us closer to understanding the brain, general intelligence, and as a result, ourselves.


  1. Goertzel, Ben. “Artificial General Intelligence: Concept, State of the Art, and Future Prospects.” Journal of Artificial General Intelligence 5.1 (2014): 1-46.
  2. Kurzweil, Ray. The singularity is near: When humans transcend biology. Penguin, 2005.
Posted in Literature

Get Good Grades, Learn Science, Be Cool

Math and computer science can seem difficult, but if they seem difficult to you, they just haven’t been presented in a way tailored to how you think!

Math Tutoring

Let me show you how fun and easy it can be. I have a B.S. with honors in mathematics and a master’s in computational mathematics, and I’m working on my PhD in scientific computing with a focus in computational neuroscience and learning. I’ve been tutoring for over 6 years and I have great reviews from my students.

Check out this video of me talking about matrices and linear algebra in computer graphics: CPALMS Perspectives – Matrices in Computer Graphics.


If you need help in any college level math or computer science courses, contact me right away!

Posted in Uncategorized

Configuring an AWS Ubuntu Server Instance

Configuring the Server

After connecting to your newly instantiated server, you will want to configure it. The first thing I always do is create an account for myself and then disable the public/private-key authentication requirement for SSH. Be wary about whether you choose to do this as well: RSA key authentication is a far superior security measure to a username/password combination. However, I do not keep any sensitive information on my servers, I use many different computers and creating key pairs for all of them is a pain, and lastly my passwords are strong.

To create a new user you will need the adduser command.

sudo adduser username

After the above command, enter the relevant information and continue. To ensure that username has been added to the list of users you can examine the last line of the /etc/passwd file.

tail /etc/passwd

Next, you should add your new user to the super user group, sudo. The usermod command with the -a and -G flags will get the job done.

sudo usermod -a -G sudo username

Here -G says to add the user, username, to the group, sudo, and the -a flag ensures the group is appended to the list of groups the user currently belongs to. This ensures any previous group memberships are not overwritten.
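Note that a group change only takes effect the next time the user logs in. A quick, low-risk way to verify the membership afterwards is the id command:

```shell
# Show the groups the current user belongs to; pass a username
# (e.g. "id -nG username") to check another account instead.
# After the new user logs out and back in, "sudo" should appear here.
id -nG
```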

Once I’ve got my new account made, I’d like to be able to log in to the server simply by specifying my username and password. This can be taken care of by changing the configuration of the SSH daemon running on the server. Edit three lines in the /etc/ssh/sshd_config file…

RSAAuthentication yes -> RSAAuthentication no
PubkeyAuthentication yes -> PubkeyAuthentication no
PasswordAuthentication no -> PasswordAuthentication yes
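If you find yourself doing this on several servers, the three edits can also be scripted. Below is a minimal sketch of a shell function that applies them with sed; the function name and the file-path argument are my own additions, and on a real server you would run it with sudo against /etc/ssh/sshd_config after making a backup.

```shell
# Flip the three SSH authentication settings in the given sshd config file.
# Handles lines that are present, commented out, or set to the opposite value.
enable_password_auth() {
    config_file="$1"
    sed -i \
        -e 's/^#\?RSAAuthentication .*/RSAAuthentication no/' \
        -e 's/^#\?PubkeyAuthentication .*/PubkeyAuthentication no/' \
        -e 's/^#\?PasswordAuthentication .*/PasswordAuthentication yes/' \
        "$config_file"
}

# Usage on the server (make a backup first):
#   sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
#   enable_password_auth /etc/ssh/sshd_config
```

The `#\?` in each pattern lets the substitution also catch directives that ship commented out in the default config file.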

Then you will just need to restart the ssh daemon.

sudo service ssh restart

Now you can log into your server with the usual approach…

ssh username@<server-ip>

When you don’t yet have a domain name set aside for your server, you will always need to reference it by its IP address. This is uncool. To make life easier I add an entry to the /etc/hosts file on my local machines, mapping the server’s IP address to a short alias (the values below are placeholders):

203.0.113.10    myserver

Adding a line like the above to /etc/hosts will let you access the server by its alias via ssh, web browsers, and more. i.e.

ssh username@myserver

This also makes things easier when setting up virtual hosts on the instance’s Apache web server.

Lastly, you want to get your Ubuntu version up to snuff with all the latest security patches and updates. Run an update/upgrade to get that underway.

sudo apt-get update && sudo apt-get upgrade

In most cases you’ll be doing development, coding, and networking. You are going to need to install some software from Ubuntu’s repository to get working. Below are a few packages that I recommend for general use.

sudo apt-get install build-essential git cmake

Posted in Coding | 1 Comment

Launching an AWS Instance

Reaching into the Cloud

We will be configuring a virtual server to provide data aggregation and visualization services. There will be many posts in this series, but we must begin by initializing our Amazon Web Services (AWS) instance. First choose to “Launch Instance” from the EC2 section of your AWS console.

Step 1: Choose AMI

We will be using the Ubuntu Server 12.04.3 LTS – ami-6aad335a (64-bit) Amazon Machine Image (AMI). Choose an appropriate image for your needs.

Step 2: Choose an Instance Type

As for the Instance Type, we will be launching a prototype for testing. If this is your first time using AWS, a micro instance should be available in the free tier. A micro instance will be sufficient for our testing needs.

Step 3: Configure Instance

The default settings should be sufficient for most needs. New AWS users may wish to check the “protect against accidental termination” box. Users unfamiliar with the AWS interface sometimes accidentally delete or terminate an instance; checking this box requires you to remove termination protection before the instance can be terminated.

Step 4: Add Storage

In almost all cases web services will be accumulating, referencing, and manipulating data. It is not wise to store this important data on the server itself, for if the server crashes your data is lost as well. AWS provides a service called Elastic Block Storage (EBS) designed to ameliorate this issue. In this step you can attach an arbitrarily sized storage volume to your instance and later mount it wherever you need.

Step 5: Tag Instance

Tagging enables you to conveniently label your instance. This is most useful when you have many instances in different groups with different purposes. For your first instance a simple “Name” = “Webserver” key value pair should be sufficient.

Step 6: Configure Security Group

This is a very important step. The security group controls which ports should be open to the public. You can consider it a watered-down version of iptables. Each case will have different needs, but in general you’ll want to have ports 22, 80, and 443 open. These provide access to SSH (22), HTTP (80), and HTTPS (443).

Step 7: Review Instance Launch

Now just review your configuration and, when you’re ready, launch your new instance. At launch you will be asked to use or create a security key pair. You will need this key to access your instance, so make sure you download it to a safe place.

Connecting to your new Instance

Now to connect to your new instance you’ll need to use an SSH client that can employ RSA key authentication. Linux users can just use the ssh command with the -i flag to specify the local key downloaded in the previous step. Each instance comes with a default user account that you must log into before you can create any others; for our Ubuntu image it is ubuntu. Lastly, you must get the public IP address of your instance. You can see what this is by clicking on the “instances” tab of the EC2 section within your AWS console. For example, if my key were located at “/path/to/key.pem” and I were logging in as ubuntu, I would connect to the instance’s public IP address with the following command.

ssh -i /path/to/key.pem ubuntu@
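One hiccup worth noting: ssh will refuse to use a private key whose permissions are too open. A quick fix, using the example key path from above, is to restrict the file to its owner before connecting.

```shell
# Make the key readable only by its owner;
# ssh rejects keys that other users could read.
chmod 400 /path/to/key.pem
```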

Check out the next post

Configuring an AWS Ubuntu Server

to learn how to create accounts, edit privileges, and configure your new instance.

Posted in Coding | Leave a comment

Enter OpenTiles

8:00am, April 20th: I enter the warehouse known as Making Awesome, Tallahassee, Florida’s own makerspace. It was one of the 75 locations around the globe chosen to host the world’s largest collaborative 2-day hackathon, the 2013 International Space Apps Challenge. I see a friend of mine at his laptop amidst the tables, wires, and people. I approach him, and after the usual greetings we begin the following dialogue.

Nathan: “So what project are you thinking of working on?”
Olmo: “I have no idea… I was just going to find some people working on a cool project and join them.”
Nathan: “Wow, I had the same plan…”
Olmo: “Well it doesn’t look like any of the groups here are working on a cool project.”
Nathan: “Ya… How about we just start one ourselves?”

And that’s exactly what we did.

We developed a solution to the Earthtiles challenge. Both Olmo and I work at the Center for Ocean-Atmospheric Prediction Studies, known simply as COAPS, and we’ve collected a good deal of experience processing satellite data, which was the underlying principle of the challenge. Shortly after we got started, a new participant approached us. It was Samuel Rustan, an Electrical Engineering student who had lofty goals of working with people on one of the cubesat challenges. After we explained our project to him, he continued perusing the room. I think he quickly realized there was no one at our local branch working on the cubesats, and he got along with Olmo and me pretty well, so he came back and asked if he could join our group. We gladly welcomed him. His programming prowess was humble, but his enthusiasm was unrivaled.

We worked through the night, and come Sunday evening the judges came to our station and we presented our work. They seemed impressed, and we were content with our presentation. After their lengthy deliberation (held in the room with the cookies and snacks), they emerged to announce the winners. Our project was announced last, alongside a very cool board game project, as one of the two that would continue on to global judging.

Our Project – OpenTiles

Our project page, OpenTiles, was finished last night and will be the medium that presents our project to the international judges. Judging ends on May 22nd, when the 5 award-winning projects will be announced. The 5 awards are…

  • Best Use of Data – The solution that best makes space data accessible or leverages it to a unique purpose / application.
  • Best Use of Hardware – The solution that exemplifies the most innovative use of hardware.
  • Galactic Impact – The solution that has the most potential to significantly improve life on Earth or in the universe.
  • Most Inspiring – The solution that captured our hearts and attention.
  • People’s Choice – Determined from a public voting process facilitated through the website.

If we have any chance of winning, it will be the Best Use of Data award. Our project is very specialized, and it’s unlikely that most laypeople will understand it, so the People’s Choice award looks improbable… unless all of my altruistic readers vote for our project on the SpaceApps website!

Posted in OpenTiles | Leave a comment

Ubuntu gets Commercial Grade Video, Audio, and Games

As an avid open source user and unabashed Ubuntu advocate I’ve been politely coercing people to use Ubuntu/Linux products for years. I’m sure we’ve all heard the most common rebuttals, “I’ve got nothing against Linux, but…”

  • “There’s no audio or video editing software.”
  • “Most of my favorite games won’t run on Linux.”
  • “I can’t edit my documents and spreadsheets for work.”
  • “No one really uses it.”

The list goes on and on… While the majority of the excuses are just plain silly, many of the others had been “patched” with the advent of WINE. Unfortunately, most of the users who would actually benefit from it don’t know about it. But in the past few months and days, the 2 leading excuses have suffered a brutal defeat. While the war is not over, today we can all be proud Ubuntu and Linux users.

04/30/2013 – The beta release of a Hollywood-quality video/audio editing suite for Ubuntu…


02/14/2013 – The long awaited arrival of Ubuntu’s future gaming pride…


May all of our futures be rich with Ubuntu software.

Posted in Ubuntu | Leave a comment

Registering Two Point Clouds

I finished this project last year and it’s been ‘dying’ to be posted. My goal was to take point clouds obtained from two Kinects and register them into one coordinate system. After the registration process was completed, the new point cloud would be much more robust with many of the obstructed blank spots filled in.

The project follows a simple algorithm.

  1. Use libfreenect to obtain point clouds from both Kinects
  2. Use OpenGL to display the point clouds in an interactive virtual environment
  3. Use OpenCV to display the RGB streams for the user to select correspondences
  4. Calculate the transformations using Procrustes Analysis and the correspondence matrices
  5. Apply the translation and rotation to the point clouds visualized in OpenGL

The implementation of this algorithm can be found here.

After a few weeks of stagnant development with the project, I made a bet with my friends and advisor that I could finish the project in one weekend before I left for a vacation. Below is the video which resulted from my sleepless weekend hackathon.

After the project was finished, I used it to complete what is called an “Honors Thesis” here at my university: an undergraduate research project which, once defended successfully, allows the student to graduate “with honors” on their diploma. Looking back on the thesis now, I would have done things differently – but isn’t that almost always the case? Nonetheless, it was a milestone in my life and it is my work.

Let me know if you’d like a copy of it and I’ll be happy to send it to you!

Posted in 3D Scene Reconstruction | 20 Comments