Dirk de Wit
Online Portfolio.
UX Designer & Researcher
See Projects
About me


As a part-time User Experience Designer/Developer, I work on a web application used by railway engineers to shape a safer work environment. My job is to improve the user interface by designing a new interaction flow for a better user experience. In addition, I implement the front end of the web application. Technologies and methods used: lo-fi prototyping, Balsamiq wireframes, Photoshop, JavaScript, jQuery, Ajax, PHP, HTML5, and CSS3. Located in Oisterwijk, the Netherlands.


1. Knowing the user

To get to know the users and uncover potential problems in their daily interaction with the current system, I talked with users and asked them to guide me through the application. This often revealed struggles and inconveniences first-hand.

2. Lo-Fi prototyping

The new interaction was designed through a series of lo-fi prototypes. The prototypes were discussed and tested with users to find flaws and obvious problems, and were improved after each discussion.

3. Wireframes

After several paper prototypes, a wireframe was developed and again discussed and tested with users.

4. Implementation

Once the entire new user interaction was designed, I started implementing the front end of the new application along with the other developers on the development team.

Tools and Techniques

balsamiq html5 javascript css3 jquery php eclipse bootstrap github


Man for the job: Investigating choice architectures to support recruitment decision-making strategies in a personnel-selection system.

Various studies report that humans have limited capacity to process information when making decisions, such as limits on working memory and computational ability. These limitations lead to selective attention, which can make decision-makers susceptible to irrelevant but salient information. The decision process becomes especially difficult when more attributes are hard to trade off against each other, or when the decision-maker is unsure about the attributes' values.

Personnel selection is a domain in which decision-making involves many attributes, difficult trade-offs, and a variety of strategies. Recruiters report using various strategies for personnel selection: they often start with a sifting strategy to reduce the set of options, and then process the resulting set more extensively.

A good choice architecture can support coping with difficult trade-offs between attributes and help people make effective decisions without simplifying the decision to a focus on a few (irrelevant) attributes. Various choice architecture tools were investigated, such as a categorized presentation of the list of candidates, default settings on candidate attributes (e.g., communication skills, experience), and equal weights for each candidate attribute. The present research proposes a personnel-selection support system that aims to improve a recruiter's choice satisfaction, process satisfaction, system satisfaction, and the system's effectiveness by leveraging the defaults, categorization, and equal-weights choice architecture tools.

Underlying the interface is a weighted-additive decision support tool that sorts the best options based on the attribute importances set by the participants. In a three-step process, participants 'hire' one candidate from a set of ten. In each step more information about the remaining candidates is added, such as general mental ability, communication skills, and presentation. The choice architectures are evaluated through a questionnaire measuring user experience-related factors such as system satisfaction, choice satisfaction, choice variety, and choice difficulty.
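The sorting step of such a weighted-additive tool can be sketched in a few lines. This is an illustrative sketch only: the attribute names and candidate values below are placeholders, not the study's actual data.

```python
# Hypothetical sketch of the weighted-additive ranking described above.
# The attribute names and candidate values are illustrative, not study data.

def rank_candidates(candidates, weights):
    """Sort candidates by the weighted sum of their attribute scores."""
    def score(candidate):
        return sum(weights.get(attr, 0) * value
                   for attr, value in candidate["attributes"].items())
    return sorted(candidates, key=score, reverse=True)

candidates = [
    {"name": "A", "attributes": {"communication": 7, "experience": 4}},
    {"name": "B", "attributes": {"communication": 5, "experience": 9}},
]

# With equal weights, B (5 + 9 = 14) outranks A (7 + 4 = 11).
ranking = rank_candidates(candidates, {"communication": 1, "experience": 1})
```

Moving an attribute slider corresponds to changing one entry of the weights dict, after which the list re-sorts itself.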

The results show that choice difficulty can be reduced by showing the candidates in a categorized list. Participants also tend to accept default values of attribute weights, which results in fewer system interactions. This study suggests that all choice architectures influence each other and have a combined effect on system satisfaction. In line with previous research, system satisfaction positively influences choice satisfaction, which indicates that an appropriate combination of choice architectures relates to a more satisfying choice. Thus, if the system is good, users make more satisfactory choices.

View Live

Project process

1. Problem statement

During the first phase of my thesis I discussed possible topics with my supervisors, at that time mostly with Prof. Steven Weber at UC Berkeley's School of Information, and remotely with my first supervisor Dr. Martijn Willemsen of the Human-Technology Interaction group at Eindhoven University of Technology. I was particularly interested in recommender systems and did a short literature study on the topic in preparation for the discussions. The topic then shifted towards personnel selection, taking into account over-qualification of a job candidate.

To find out how recruiters and hiring managers perceive this problem of over-qualification, I conducted a few interviews with recruiters in the Bay Area. Every interview revealed that over-qualification depends on the job and that every job is different, making over-qualification difficult to determine.

Eventually, after backing up the findings with more literature on the subject, we decided to focus on the decision-making process of selecting the best candidate for the job out of a set of qualified candidates, taking into account the trade-offs recruiters make when evaluating a candidate's various attributes. For example, a candidate could be over-qualified or score worse on integrity while scoring highest on general mental ability tests. In that case, a recruiter has to trade off the candidate's attributes when trying to select the best hire for the position.

2. Research Aim

After reading the literature, I decided to test the decision-making process by introducing various choice architectures in the user interface. I wanted to develop a support system that helps recruiters or hiring managers make the best hiring decision. This system was a rational weighted-additive system with attribute sliders that let recruiters set the desired importance, or weight, of each attribute; as a result, the list of candidates would rearrange itself. Candidates either had preset default weights on a few attributes, equal weights for each attribute, or default weights on every attribute that varied around the center of the attribute scale. Finally, the list of candidates was either categorized by default or shown as one list. Thirteen hypotheses were composed to evaluate the effects of the choice architectures on user experience-related factors such as system satisfaction, process satisfaction, choice satisfaction, and perceived choice variety. The UX factors were measured using the QUIS 7.0 usability questionnaire and questions adapted from Knijnenburg et al. (2012). The research question is:

RQ: “Can the presentation of candidates based on the choice architecture tools such as categorization, defaults, and equal weights improve a recruiter’s choice satisfaction, process satisfaction, system satisfaction, and the system effectiveness?”

3. Method

As described in the previous section, a decision-making support system was developed to test the choice architectures. This support system was built in PHP, HTML5, CSS3, jQuery, jQuery mobile, and d3.js. Since there were 3 choice architectures to evaluate, the experiment had a 2 (default attribute weights vs. none) x 2 (equal attribute weights vs. none) x 2 (categorization vs. list) experimental design.

Every participant selected the best candidate for two job descriptions: one job was a software developer, the other a clinical psychologist. The jobs were presented in random order to control for learning effects. Each job was divided into three steps. In the first step participants selected six candidates from a set of ten, in the second step they selected three candidates from the remaining six, and in the last step a final candidate was selected. In each step more information was added to the candidate descriptions, and more attribute sliders were added to the interface for sorting the list. In the first step only resume information was available to participants; in the second step interview results were added. In the last step a general mental ability test score and an integrity test score were made available to support the participant in selecting a candidate.

Finally, after the two job cases, participants filled in a questionnaire used to measure user experience-related factors. With a factor analysis I tried to extract the UX factors from the questionnaire results. Of the 212 participants, 140 finished the entire experiment including the questionnaire.

4. Results

The factor analysis was not able to distinguish between system effectiveness, system satisfaction, and process satisfaction. Trying to find these three different factors was a little too ambitious, since the factors are very closely related. The questionnaire was not distinctive enough to keep them apart, which resulted in finding one factor that we called system satisfaction.

No main effects were found for any of the choice architecture tools used in the personnel-selection support system on system satisfaction. However, the three choice architectures do influence each other, resulting in a 2-way and a 3-way interaction between defaults, equal weights, and categorization. Participants almost always report higher system satisfaction when a list is shown without categories.

In the other conditions, participants were more likely to make adjustments to attribute weights that might be less salient when the candidates are categorized, since the categorization might interfere with the feedback given when attribute weights were adjusted. In other words, when using attribute sliders while the list is categorized, the attribute weights only affect the order of the list within the categories. This could sometimes result in missing the animation that was used as feedback to make the reordering salient to the user.

The main finding of this study suggests that choice architectures influence each other and that system satisfaction may be higher when the combined choice architectures do not interfere with the applied decision-making strategy. For example, when defaults are combined with equal attribute weights, participants are perhaps most likely to apply a weighted-additive strategy, which is compensatory. When this attribute-weight situation is combined with categorization, the interaction might interfere with the decision-making strategy, since categorization is more likely to push the strategy towards a more non-compensatory one. When not all attribute weights are set, participants are less likely to use a weighted-additive strategy; the strategy then applied could be less compensatory, and this situation is perhaps better combined with categorization. Categorization can, for example, be used to highlight the most important attribute: the alternative that scores highest on the secondary attributes (based on the defaults) is shown at the top of the list within its category.

Tools and Techniques

jquery html5 javascript css3
php mysql spss


Scaling Presence in a Virtual World with a Rasch model.

Project process

1. Problem statement

In a number of studies conducted over the past two decades, researchers have been trying to measure a phenomenon called 'presence' (e.g. Witmer & Singer, 1998; Wissmath, Weibel, & Mast, 2010; Slater, 2004; Cobb, 1999; Schubert, Friedmann, & Regenbrecht, 2001). These studies contributed to a debate about what presence is and how it occurs in a virtual environment (O'Neill, 2005). Presence is most often defined as a person's subjective sensation of 'being' in one place or environment, even though the person is physically located in another (Witmer & Singer, 1998). Slater and his co-workers (2009) describe presence as the tendency to respond to virtually generated data as if it were real, and state that presence arises when real sensory data is successfully replaced by virtually generated data.

The definition of presence adopted in this report is "the perceptual illusion of non-mediation", which occurs when technology becomes transparent to the user (Lombard & Ditton, 1997). As a result, the user responds as if the medium does not exist, that is, as if the virtual world were real (O'Neill, 2005). This is a natural consequence of human embodiment (Slater & Usoh, 1994; Haans & IJsselsteijn, 2012). One can also fall for the illusion only partially (Slater et al., 2009; Haans, 2014): the technology may become transparent for automatic or visceral responses, but not for behaviors or cognitive beliefs.

Presence is often confused with immersion. However, presence is a subjective human response to immersion, while immersion can be assessed objectively. The technology used can be immersive, and given the same immersive system, people can experience different levels of presence (Slater, 2003). Immersion and presence are nevertheless related: reaching a higher level of presence requires more immersive media technologies.

2. Method

87 participants (51 men, 36 women; average age 27 years) took part in the experiment. Every participant was paid €5.00 as compensation. Participants were required to have normal or corrected-to-normal vision. One participant did not meet this requirement, being blind in one eye; since this impairment did not affect the trial in the virtual environment, the participant was not removed from the analysis. For 9 other participants the orientation of the head-mounted display caused trouble, so they were removed from the analysis. The analysis was eventually run on 78 participants (44 men, 34 women; average age 27 years, range 18 to 67).

First, participants' skin conductance was measured while they stood still and simply looked around, to avoid high peaks in the data caused by putting on the HMD. When the skin conductance measurement showed the participant was at rest, the trial was started. Participants were asked to look straight ahead while a virtual ball was bounced toward them at eye height, in order to measure an automatic response such as evading the ball. The response was scored as 1 when the participant showed a clear reaction in body posture, either raising an arm to block the ball or moving the head to avoid it, and 0 otherwise; responses were captured with camera recordings. Second, participants were allowed to walk around freely in the first small room, which contained only a closed door to the next room, a bookshelf, and a desk; in the meantime the skin conductance response was recorded while participants were exposed to 'normal' stimuli. Third, the door to the next room was opened so that participants could walk through. Depending on the participant's height, the door was too low to walk through normally, and recordings were made of how people passed through the opening. In the second room participants found a pit in the floor. Skin conductance was measured continuously, and a comparison was made between the 30 seconds before and the 30 seconds after introducing the pit stimulus. Participants were asked to look straight ahead and take a big step into the pit. If a participant did not want to step into the pit, the trial was terminated and this behavior was scored as 1. If the participant did step in, the camera recordings were used to evaluate hesitation and the possible occurrence of a knee reflex. Participants were finally asked to look up, which concluded the trial in the virtual world. Next, the MOBI and camera recordings were stopped.

Finally, participants were asked to fill in a questionnaire; 7 of its questions were used as separate items for the Rasch model. The questionnaire also included 14 questions adopted from Schubert et al. (2001), known as the igroup presence questionnaire (IPQ). The average factor scores of the IPQ were used to validate a participant's level of presence against the measure score determined by the Rasch model.

In the current study two measurements were used. The first is the Rasch measurement per item and per person, comprising a variety of observations such as visceral, behavioural, and cognitive responses. The second comprised the 14 IPQ questions.
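The dichotomous Rasch model behind the first measurement relates a person's latent presence level to an item's difficulty through a logistic function. A minimal sketch, with illustrative parameter values:

```python
import math

# Illustrative sketch of the dichotomous Rasch model underlying the first
# measurement: the probability that a person "passes" an item depends only on
# the person's latent presence level (theta) and the item's difficulty (b).

def rasch_probability(theta, b):
    """P(response = 1) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty the probability is exactly 0.5, and it
# increases as the person's presence level exceeds the item's difficulty.
p_equal = rasch_probability(theta=0.5, b=0.5)   # 0.5
p_high = rasch_probability(theta=2.0, b=0.5)    # > 0.5
```

Fitting the model (as Winsteps does) estimates a theta per person and a b per item from the observed 0/1 responses, which is what allows the items to be ordered by difficulty in the results below.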

3. Results

In this study it was found that participants respond to various events in the virtual world as if they are real. Automatic, behavioral, visceral, physiological, and cognitive self-report responses were all recorded across the group of participants. When taking into account the SE of the various observations in the Rasch model, half of the items can be distinguished from each other in terms of difficulty and have a mostly expected order, especially when considering different occasions. Throughout the trial in the virtual environment, different occasions occur, and multiple items could be grouped into one occasion as described earlier (e.g. the pit occasion). Participants could experience a peak in their presence level at different occasions. When the findings of multiple occasions are then mixed, this could result in an illogical order of items or illogical item difficulties. An item at the end of the trial could also influence the overall presence level after showing (or not showing) various behaviors, influencing the final answers in the questionnaire filled in after the trial. Participants could therefore also respond differently to a subset of questions.

The Rasch model shows a few items with difficulties that go against expectations. The item "I am certain the two rooms are real" cannot occur more often for persons than the item "did not step in the pit", since persons are never expected to step into a 12-meter-deep pit when they cognitively believe the pit is real (Usoh, 1999; Slater, 2009). Even when taking the SE into account, not stepping into the pit was far more difficult than the cognitive claim. Furthermore, the item "I experienced a real event of falling down into the pit" was expected to have a higher difficulty, since similar items (e.g. cognitive self-reports) are expected to be similar in difficulty as well. For the two cognitive self-reports it might have been better to limit the possible answers to a dichotomous "yes" and "no", since somewhat disagreeing could be seen as a "no". Beforehand, a reflex in the knees was expected to be an easy item occurring for most participants; however, it turned out to be the hardest item. An explanation could be that participants experienced the event of falling into the pit as fake, or that the fall speed was too high for the body to respond. Another item with a high OUTFIT is "I had the tendency to duck for the door"; this item has many missing values since a few participants were too short for the door. In addition, this question could have been misinterpreted: participants who did actually bend for the door might have thought they did not have the tendency to bend. More participants answered that they had the tendency to bend than actually bent, which is expected in the theory. A few items occurred less frequently than expected and are therefore somewhat 'harder' than the theory predicts; the items "Ball reflex" and "Skin conductance response" were expected to occur more often than observed.

Tools and Techniques

3ds Max Python Vizard Matlab Winsteps


A case competition in which we had to design and defend a new use case for LTE Direct technology. My team won first prize in this competition.

LTE Direct discovers other LTE-enabled devices within a range of up to 500 meters. This extremely fast and battery-efficient technology can be used for data collection by installing LTE Direct-enabled devices on the retailer's side. With the collected data, user behavior can be analyzed to deliver relevant, direct, real-time advertisements to high-potential customers.


1. Kick-off

Before starting the Innovation Challenge we attended a kick-off where we could meet other contestants and try to form an interdisciplinary team. During this kick-off the case was explained and a schedule was presented.

2. Brainstorm

Together with a team of 3 UC Berkeley Master's students from various professional backgrounds, we held a brainstorming session to come up with a great idea. Using post-it notes we generated at least 20 ideas, and after discussing them we chose a top 3.

3. First selection

The 3 ideas selected during the brainstorming session were each elaborated in a first presentation. Based on the presentations, the 3 best teams were selected for the next round.

4. Developing

After making it through to the next round, we worked on the project a few days every week. The business plan was developed, together with promotional videos and various scenarios and personas. In addition, some of the screens the user would interact with were designed.

Tools and Techniques

html5 css3 photoshop


A design-based project: the goal was to design, prototype, and evaluate the user interface of a new application. Methods such as contextual inquiries, focus groups, interviews, and usability tests were applied. Affinity diagrams, personas, storyboards, paper prototypes, and wireframes were developed while iteratively evaluating and improving new designs. The latest version of the Podium App was a working prototype built with MySQL, PHP, CSS, HTML, and jQuery mobile. The Podium App helped people improve their performance while giving a speech by offering functionality such as an adjustable scroll speed for the text and tracking of the elapsed time. Users could adjust the interface in terms of font, font size, and color scheme. In addition, users could highlight important text and change the view mode from full text to first sentences or keywords only.

The project team consisted of five UC Berkeley Master's students with different professional backgrounds. Since my background was in software development and psychological research, I was in charge of the experimental design and procedures. In addition, I did a lot of prototyping, especially for the interactive prototype. The other tasks were evenly divided across the team.

View Live

Project process

1. Problem statement

We started the project with the idea in mind that only 7% of a spoken message is conveyed through words; the other 93% is conveyed by body language and vocal elements. We therefore hypothesized that it is important to minimize distraction and cognitive load while giving a speech or presentation. Our goal was to tackle this problem; however, the first phase was to explore and find the actual problem to tackle. Hence, we started by conducting a series of 8 contextual inquiries with potential users such as Toastmasters.

2. Affinity Diagram

Based on our findings we created an affinity diagram to visualize the qualitative data. Multiple clusters were developed to describe the needs of the (potential) users.

3. Personas

After creating the affinity diagram, 5 personas were developed based on the findings and needs that were visualized. One of the personas was selected as being the main user on whom we were going to focus when exploring and developing our solution.

4. Story boards

The next step was to develop storyboards in order to visualize the situations in which the solution could be used by the potential users.

5. Lo-fi Prototypes

After defining the storyboards, the first lo-fi paper prototypes could be created. We created 4 different prototypes as a group, with everybody contributing ideas. Though the first prototypes were confusing and sometimes illogical, they served as a great starting point for better prototypes later on. The first lo-fi prototypes were built with paper, pen, tape, and glue.

6. Usability tests

The first paper prototypes were tested on potential users. A set of 4 tasks was designed, and users were asked to complete the tasks while thinking aloud, expressing any confusion or (dis)satisfaction. 2 project members observed the trials and quietly took notes.

7. Wireframes

After the findings of the usability tests were discussed and digested, 4 wireframes were built using Balsamiq. At first Balsamiq seemed an exhausting product with some odd functionality that made building the prototypes a lot of work; later on we realized we had been using the product slightly wrong.

8. Heuristic Evaluation

Using Jakob Nielsen's 10 usability heuristics, we conducted an in-class heuristic evaluation of another team's wireframe. Our wireframes were evaluated as well.

9. Interactive prototype

Finally, an interactive prototype was built with the technologies shown below. I wrote a large part of the prototype using HTML5, jQuery, jQuery mobile, Bootstrap, CSS3, PHP, MySQL, and JavaScript. We then designed an experiment comprising 3 tasks to test the interactive prototype online; for this purpose, the prototype stored user interactions in a MySQL database.

The interactive prototype is accessible online here.

Tools and Techniques

balsamiq html5 javascript css3 jquery php mysql github


An Arduino-based project: the Cultural Touch is an interactive tool designed for educational purposes that produces information about specific countries when physically touched by the user(s). We developed a fun, intuitive learning opportunity to promote knowledge of the world we live in and its history. Through a projection-based, tunable capacitive sensor platform, a user's touch on a particular region of an interactive globe causes information for that region to be displayed on a digital screen. Under the current format, the cultural information displayed relates to the year the globe is in, which can be changed by spinning the globe. A spin interaction sends the user back and forth in time.

The spin was implemented by placing 3 light sensors underneath the inflatable ball. The projector responsible for the map presented on the ball shed light on the sensors, enabling us to calculate the rotation of the ball. The data from the light sensors was sent to an Arduino, which in turn sent the data to a Java application over the serial port. The Java application comprised the logic to decide which map to return to the projector.
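The core of the rotation logic can be sketched roughly as follows. This is a hypothetical illustration in Python (the real implementation ran on the Arduino and in the Java application); the sensor positions and readings are assumptions, not the project's actual calibration.

```python
# Hypothetical Python sketch of the spin detection (the real implementation
# ran on the Arduino and in the Java application). The sensor positions and
# readings below are assumptions, not the project's actual calibration.

SENSOR_ANGLES = [0, 120, 240]  # three light sensors spaced under the ball

def estimate_rotation(readings):
    """Return the angle of the sensor receiving the most projector light."""
    brightest = max(range(len(readings)), key=lambda i: readings[i])
    return SENSOR_ANGLES[brightest]

# The second sensor reads brightest, so the ball faces roughly 120 degrees.
angle = estimate_rotation([310, 890, 420])
```

In the actual system this estimate would then select which map image the Java application returns to the projector.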

The team comprised 4 UC Berkeley Master's students with slightly different backgrounds. As the only developer, my role was to develop most of the Arduino code and the entire Java application; I also built the communication between the Arduino and the computer. The Java application calculated which angle of the map to present and which timezone the user was in. In addition, it presented the right information on the LCD screen based on the countries touched and the timezone. The Java application was built in Eclipse.

Project process

1. Problem statement

The task of this project was to design a novel interaction beyond the desktop (e.g. keyboard, mouse, and monitor), using the physical body of the user. Everyone had to send in an idea for a system they wanted to design and implement. My idea was to build an interactive globe that returns information when users touch it. I found 3 other students who were interested in the idea, so we could start working on it.

2. Brainstorming

The first step was to brainstorm and present an idea. We knew we wanted to build a globe; however, it was important that the idea was novel and focused entirely on the interaction with the globe. Brainstorming gave us plenty of ideas to work with.

3. Story boarding

After the brainstorming session and feedback from the professor and the class, we made 3 storyboards to show how we thought users would interact with the system.

4. Prototype

After choosing to build the system using a projector and an LCD screen, we thought we would not be able to build the prototype within the boundaries of the course. Therefore, we started by making a digital representation of the system that clearly presented how it was going to work. Eventually, we succeeded in building the entire prototype and showcased it at the end of the semester: an Arduino application, a Java application, a projector, an LCD screen, and an inflatable ball with capacitive areas.

Tools and Techniques

java arduino eclipse


Hypothes.is is a browser plug-in that lets users annotate any content on the web that interests them. When annotations are made, users can add comments meant to add value to the annotated content. One of the team members of this project, for the course Applied Natural Language Processing at UC Berkeley, works for Hypothes.is and wanted a tool to flag uncivil comments to protect the Hypothes.is comments database, making it more valuable to users. A good comment on Hypothes.is should be insightful, well written, polite, and relevant. Eventually, the flagger should help a moderator flag comments and adapt to the moderators' needs; this was done by growing and improving the training set to improve accuracy. Using the Natural Language Toolkit (NLTK) with Python, we developed this comments flagger.

With a team of 3 UC Berkeley Master's students with various backgrounds, we worked on a plug-in that could flag uncivil comments. My role was to write a Python program that connected to the YouTube API and the Reddit API to automatically collect comments from both sources, in order to create a training set, a held-out set, and a test set. We expected to find plenty of uncivil comments on YouTube and used Reddit history for good comments. In addition, we discussed which features could be useful to train the comments flagger. We wanted to avoid overfitting the data while developing features that achieved a high accuracy in recognizing uncivil and mean comments.

Project process

1. Problem statement

At first we brainstormed to come up with a good idea of how to flag uncivil comments. Our first proposal was not strong enough, since we did not take training on good comments into account. We revised our proposal based on this feedback.

2. Exploration

We explored various sources to retrieve bad and good comments. We decided YouTube and Reddit were great sources to retrieve both.

3. Finding Features

We tested various features to maximize the accuracy of the comments flagger, such as whether the language of the comment corresponds to the language of the annotated text, foul language (with some tolerance), grammar, and sentence structure.
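Features like these can be expressed as a feature dictionary per comment, the form NLTK classifiers train on. The sketch below is illustrative only: the word list and thresholds are assumptions, not the project's actual feature set.

```python
# Illustrative sketch of the feature extraction: each comment is mapped to a
# feature dictionary of the kind NLTK classifiers train on. The word list and
# thresholds are assumptions, not the project's actual feature set.

FOUL_WORDS = {"idiot", "stupid", "trash"}

def comment_features(comment):
    """Map a raw comment to a dict of boolean features."""
    tokens = [t.strip(".,!?").lower() for t in comment.split()]
    return {
        "has_foul_word": any(t in FOUL_WORDS for t in tokens),
        "all_caps": comment.isupper(),
        "very_short": len(tokens) < 4,
    }

features = comment_features("What a STUPID comment!")
# has_foul_word is True; the comment is neither all caps nor very short.
```

A list of `(comment_features(text), label)` pairs is then what gets fed to a trainer such as NLTK's `NaiveBayesClassifier.train`.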

4. Testing Features

Eventually, using a Naive Bayes classifier we achieved 77% accuracy, while a decision tree achieved 71%. The maximum entropy classifier reached an accuracy of 76%, and 80% on the held-out data.

Tools and Techniques

Python matplotlib


In the School of Information Social Innovation Hackathon the goal was to tackle real-world problems faced by the Peace Corps. We could choose one of the ten described challenges. My team chose to tackle the Peace Corps knowledge-exchange problem, which the Peace Corps described as:

"Adopt a knowledge sharing platform that allows for common content management system features (files, version control, user groups, teams, and spaces). Preferably this platform will be available in low-bandwidth contexts and access from various devices (Smartphones, tablets, laptops, etc.)."

Our team designed and prototyped an interactive discussion board integrated into a wiki, which earned us third place in the competition.

Tools and Techniques

wiki photoshop


The challenge: design and build a system to give the world’s underbanked people access to financial services — in under 24 hours. My team won this hackathon.

With team Banga, a bitcoin remittance system was prototyped and developed, resulting in a working responsive PHP web application. Users could sign up for the Banga system and optionally enter their credit card information. Another way to deposit money into an account was to redeem the code on a scratch card. Users could send money to other users in their address book; the remittance was done by converting the sender's currency to bitcoin and immediately back to the receiver's currency, resulting in a faster and cheaper way of sending money from one country to another. The receiver does not need access to a bank account for this system to work. A local retailer could hold the money for local clients, and the retailer would receive a rating to evaluate trustworthiness. Finally, instead of sending money, users were able to buy products directly from the retailer for the receiving user to pick up. This functionality gave the sender some control over the spending behavior of the receiving users.
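The conversion chain (sender's currency to bitcoin and immediately back to the receiver's currency) can be sketched as follows. The exchange rates and the fee are made-up illustrative numbers, not Banga's actual values.

```python
# Illustrative exchange rates (hypothetical, not live market data):
# units of BTC per unit of each currency.
BTC_PER_UNIT = {"USD": 1 / 60000, "PHP": 1 / 3400000}
FEE_RATE = 0.01  # hypothetical 1% service fee

def remit(amount: float, from_currency: str, to_currency: str) -> float:
    """Convert the sender's currency to bitcoin and immediately back
    to the receiver's currency, minus a percentage fee."""
    btc = amount * BTC_PER_UNIT[from_currency]
    btc_after_fee = btc * (1 - FEE_RATE)
    return btc_after_fee / BTC_PER_UNIT[to_currency]
```

Because the bitcoin position is held only for an instant, the system avoids both exchange-rate exposure and the correspondent-bank fees of a classical wire transfer.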


1. Team building

Step one of this hackathon was finding a team. Starting with a somewhat larger group, we discussed some possible ideas. Eventually, the group split into three smaller groups, ours comprising three UC Berkeley master's students, all from the School of Information.

2. Brainstorming

Ideas were brainstormed and eventually drawn out on paper. With the app Prototyping on Paper (POP), we made a quick lo-fi prototype to visualize the user interaction. Some of the ideas that had a huge influence on our victory were already born during this brainstorm session.

3. Hacking

The brainstorm session was followed by 20 hours of hacking straight. Using PHP, MySQL, Bootstrap, CSS, and HTML we developed a working prototype to show after a night of developing.

4. Pitching

Eventually the idea and prototype were pitched to judges from companies such as Visa.

Tools and Techniques

html5 css3 php bootstrap github


I worked here during the summer in the Netherlands before moving to Berkeley. I worked on a .NET project in Visual Studio 2010 using technologies such as HTML, CSS, jQuery, JavaScript, and Visual Basic. The application communicated with a huge Oracle database and had a modern, clean design with a masonry grid layout, optimized for tablet use. The goal of the web application was to provide clients with a portal offering various applications that were developed especially for each client and adjusted to their needs. The applications were spreadsheet based and formed a more useful and efficient alternative to complex Excel applications. I focused on optimizing and improving a debugging system used by internal consultants, based on short unstructured interviews and contextual inquiries.

Tools and Techniques

html5 javascript css3 jquery .NET visual studio


With a team from the University of Technology Eindhoven we conducted an experiment in a virtual environment for the course Interactive Virtual Environments. The virtual world was modeled in Autodesk 3ds Max 2010 and its behavior was programmed in Vizard using Python. The virtual world was presented to participants in a CAVE. Participants were asked to point at a floating green ball, which changed color to red when they successfully pointed at it with a WiiMote. As a distraction, ten other floating balls with different colors were added to the world. While the user was executing the task, the walls of the virtual world slowly expanded, resulting in a bigger world. In addition, the image on one of the walls changed at the end of the experiment. Afterwards, participants were given a questionnaire to evaluate their awareness of something odd happening in the world.

Project Process

1. Problem Statement / Research Proposal

As a first step for this project we surveyed virtual reality research to gain a deeper understanding of previously conducted studies. We then proposed research in which we could measure inattentional blindness (i.e. participants are blind to changes in the world because they are distracted). We therefore proposed a task and an odd stimulus in the virtual environment.

2. Designing the experiment

The next step was to design the experiment and think about the different conditions that we wanted to compare. The task had to fit well in the environment and it was important to control for variables that we did not want to measure.

3. Building the experiment

After the experimental design, the virtual world was built in 3ds Max and programmed using Python in the Vizard IDE. The expansion of the walls and the changing of the wall image were both built into the 3ds Max model.
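The gradual expansion can be sketched independently of the Vizard API: each frame, the wall scale is interpolated from its start value toward a target over the trial duration, slowly enough to go unnoticed. The target scale and trial length below are hypothetical placeholders, not the values used in the experiment.

```python
START_SCALE = 1.0
END_SCALE = 1.5        # hypothetical target scale
TRIAL_SECONDS = 120.0  # hypothetical trial length

def wall_scale(elapsed_seconds: float) -> float:
    """Linearly interpolate the wall scale at a given moment,
    clamped to the [START_SCALE, END_SCALE] range."""
    t = min(max(elapsed_seconds / TRIAL_SECONDS, 0.0), 1.0)
    return START_SCALE + t * (END_SCALE - START_SCALE)
```

In Vizard, a value like this would be applied to the wall nodes from a per-frame timer callback, so the growth is continuous rather than stepwise.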

4. Conducting the experiment

Upon completion of the virtual world, the experiment was conducted with ten participants divided into two groups.

5. Data analysis

Finally, the data was analyzed using SPSS to compare the two groups and see whether an effect of inattentional blindness could be found, given the very small sample size of only ten participants.

Tools and Techniques

Python vizard 3dsmax IBM SPSS Statistics 22


This project concerned designing and implementing a virtual fitness coach system able to monitor and guide physical exercises, using data acquired from body-worn sensors. We decided to build a system for push-up exercises, so that one specific type of movement and repetition could be analyzed and modeled. One goal we felt strongly about was using as few sensors as possible, limiting the system's interference with the exercise while still being able to evaluate the necessary push-up aspects. Specific Shimmer body sensors were used to acquire data about the push-ups being performed.

Our group comprised four University of Technology Eindhoven master's students with interdisciplinary backgrounds. Since I was the only member with a software development background, my role in this project was mainly to build the connection between the Shimmer sensors and an Android app. I collected data through the Shimmer sensors and analyzed it to determine the optimal placement of the sensors on the human body. An Android application was built using the Eclipse IDE, OpenGL, and the Shimmer API to collect the raw acceleration data. The movement of the push-up was shown on the Android device with 3D OpenGL objects for real-time push-up feedback.

Project process

1. Data acquisition

The first step in designing the virtual fitness coach system was to determine how to measure the positions and movements of someone performing push-ups. Information about these parameters had to be extracted from carefully positioned Shimmer sensors. That is why the project group started by collecting quantitative data from Shimmer IMUs on several participants performing push-up exercises.

2. Feature extraction

By evaluating multiple push-ups, the sensor placement could be determined. In addition, we found features that were important for a correct push-up. When the body posture was incorrect, this could be seen in the data; in this way a concave back, a convex back, and a straight back could be distinguished from each other.
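A simplified version of that posture feature: derive a pitch angle from each accelerometer's gravity vector and compare an upper-back sensor against a lower-back sensor. The sensor placement, the tolerance, and the sign convention here are illustrative assumptions, not the values from our Simulink model.

```python
import math

def pitch_deg(ax: float, ay: float, az: float) -> float:
    """Pitch angle in degrees of a sensor, from its gravity vector."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def back_posture(upper_accel, lower_accel, tolerance_deg=10.0):
    """Classify the back as straight, concave, or convex by comparing
    the pitch of an upper-back and a lower-back sensor.
    Thresholds and placement are illustrative assumptions."""
    diff = pitch_deg(*upper_accel) - pitch_deg(*lower_accel)
    if abs(diff) <= tolerance_deg:
        return "straight"
    return "concave" if diff > 0 else "convex"
```

With the sensors aligned, a straight back keeps both pitches roughly equal; a sagging or arched back drives them apart, which is exactly the difference that showed up in our recorded data.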

3. Data Verification

Data verification, or in this case 'push-up verification', is used to evaluate whether push-up recognition satisfies the acceptance criteria. It gives a rough indication of the accuracy of the Simulink model and confirms or rejects the correctness of a push-up and the label attached to it.
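Boiled down, the verification step compares the model's labels against hand-labelled push-ups and checks the resulting accuracy against an acceptance threshold. The threshold below is an illustrative placeholder, not our actual criterion.

```python
def verify(predicted, labelled, acceptance_threshold=0.9):
    """Compare model output with hand-labelled push-ups and decide
    whether recognition meets the (illustrative) acceptance criterion.
    Returns (accuracy, accepted)."""
    assert len(predicted) == len(labelled), "label lists must align"
    correct = sum(p == l for p, l in zip(predicted, labelled))
    accuracy = correct / len(labelled)
    return accuracy, accuracy >= acceptance_threshold
```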

4. Data visualization

An important aim of this project was to give end users proper feedback based on the acquired body-worn sensor data. The system should provide visual feedback on the user's exercise performance. This is done by translating Simulink output signals into logical visualizations.

5. Portable usage

To make the virtual fitness coach genuinely useful, we built a portable application that can be used anywhere. Whenever we used MATLAB to make a graphical representation of a push-up, we were bound to a laptop or desktop computer. In addition, MATLAB would be too slow for real-time data analysis in combination with the Shimmer sensors. To make the application portable, we decided to build a Java application that runs on an Android smartphone.

Tools and Techniques

java eclipse android openGL


As part of my Bachelor's degree in ICT I successfully completed my graduation project at Detacheren|DotNet (DDN), a secondment company that detaches IT professionals to its clients. Together with a classmate I worked on a new product called MijnData (English: MyData). MijnData was a WPF project built in Visual Studio 2010 using Microsoft's RibbonBar. In MijnData, users can import their paper archive of documents into a digital online archive. With a TWAIN driver, users can scan their documents directly from the MijnData application. The application applied optical character recognition (OCR), improved by using a dictionary to reduce errors. Users can merge scanned documents so that one PDF file is generated that adheres to the PDF/A-1 ISO 19005-1 standard. As a final step, all files are uploaded directly to the MijnData server upon completion of the scanning task. Technologies used were C# .NET, WPF, XAML, CSS, and JavaScript. The application was built using SCRUM, walking through the entire design process of paper prototyping, requirements analysis, UML diagrams, and iterative development.
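The dictionary-based OCR correction can be sketched as: any recognized word that is not in the dictionary gets replaced by its closest dictionary entry, provided one is similar enough. The word list and similarity cutoff below are illustrative, and the sketch is in Python for brevity; the real application was written in C# .NET.

```python
import difflib

# Illustrative dictionary; the real application used a full word list.
DICTIONARY = {"invoice", "amount", "total", "date", "customer"}

def correct_ocr(words, cutoff=0.8):
    """Replace likely OCR misreads with the closest dictionary word."""
    corrected = []
    for word in words:
        if word.lower() in DICTIONARY:
            corrected.append(word)
            continue
        # Fuzzy match against the dictionary; keep the word if nothing is close.
        match = difflib.get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=cutoff)
        corrected.append(match[0] if match else word)
    return corrected
```

A classic OCR confusion such as reading "invoice" as "lnvoice" is caught this way, because the misread is still far closer to one dictionary entry than to any other.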

Project Process

1. Plan

As a first step, a plan was made in which we described what we were going to deliver at the end of the internship. A clear SMART goal was formulated with a clear task division. The minimum quality that we wanted to maintain was also described.

2. Requirements

The second step was to conduct a requirements analysis in order to determine what functionality the application would provide upon completion. In this document the MoSCoW notation was used to prioritize the requirements. On top of the requirements document, UML diagrams were developed, including use case diagrams, activity diagrams, sequence diagrams, and domain class diagrams.

3. Paper Prototype

While my classmate was exploring various ways to communicate with a scanner (e.g. TWAIN), I was drawing the user interaction with the system. Various screens were drawn in which the process and the flow of the application were visualized.

4. Design

When the prototypes were verified and validated, I designed the application using Photoshop. We chose to use the Microsoft RibbonBar to make the application recognizable to Windows users. This would later result in using Windows Presentation Foundation.

5. Functional Prototype

A functional prototype was developed in WPF with the RibbonBar. The design as created in Photoshop was built, and the prototype could lead the user through the entire application flow.

6. Implementation

We concluded the project by iteratively implementing all the functionality laid down earlier in the requirements document: scanning files with optical character recognition (OCR), improving the accuracy with a dictionary, converting them to PDF files, splitting and merging the files, and uploading them to the server. The functionality was implemented in numerous SCRUM sprints.

Tools and Techniques

.NET visual studio WPF photoshop


As a semester-long project at Avans University of Applied Sciences we developed an interactive campaign for a Dutch supermarket chain named Jumbo. The case description stated the problem that women tend to buy beer at the store for their husbands but never buy beer for themselves. Since these women are already near the shelves, Jumbo wanted a way to sell more beer to them. In a team of five Bachelor of ICT students we developed an interactive campaign, going through the entire cycle of exploring the problem, brainstorming, persona development, scenarios, lo-fi prototyping, wireframes, interactive prototypes, and design.

In this project the tasks were divided equally. On top of that, every project member took on their own role. My role was project leader; I was therefore responsible for the planning, the task division, and the quality of the deliverables.

Project Process

1. Brainstorming

After we were told what our problem statement was, we started to brainstorm how to tackle the problem. An affinity diagram was developed with the brainstorm ideas written down on post-it notes.

2. Personas

Based on the affinity diagram created during the brainstorm session, two main personas were created. The personas had to be quite different in order to cover a broader group of potential users.

3. Scenarios

For every persona, three scenarios were drawn to visualize how we planned to reach the persona, make sure the persona would notice our campaign, and ultimately influence the persona's behavior to buy beer at the store.

4. Prototypes

Once more was known about the users and their preferences, the first lo-fi prototypes were developed, together with style guides and quick reference cards. A requirement was that the style be in line with the corporate identity of Jumbo Supermarkten. Wireframes were developed and a design was created in Photoshop.

5. Campaign Implementation

Finally, the campaign was implemented. An Android app was built to scan QR codes and present more information about beer to the user, an interactive Flash website was developed where the user could find more information, and various advertisements were created to make sure the user would come into contact with the campaign.

Tools and Techniques

flash html photoshop


As part of my Bachelor's degree in ICT I did an internship at Acuity, where I worked with IBM Lotus Notes 8 and Lotus Notes Sametime 8.5. This internship was usually done in pairs, so we were two students. For this internship I developed several Java plug-ins for Lotus Notes in the Eclipse IDE. Eclipse first needed to be set up to test and run plug-ins in Lotus Notes directly from Eclipse. The largest plug-in was a declarations application in Lotus Notes that used a web service to look up the distance between two postcodes in order to calculate the travel cost reimbursement when employees worked at a remote location. Another (smaller) plug-in enabled address recognition throughout Lotus Notes for fast distance calculations when clicking on a recognized address. This plug-in used regular expressions to recognize multiple different forms of address notation.
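The address recognition hinged on patterns like the one below for Dutch notations (street, house number, a four-digit-plus-two-letter postcode, city). This is a simplified sketch in Python; the actual plug-in was written in Java, covered more notation variants, and this version only handles single-word street names for brevity.

```python
import re

# Simplified pattern for one common Dutch address notation:
# street name, house number, postcode (4 digits + 2 letters), city.
ADDRESS_RE = re.compile(
    r"(?P<street>[A-Z][a-zA-Z]+) "
    r"(?P<number>\d+[a-z]?),? "
    r"(?P<postcode>\d{4} ?[A-Z]{2}) "
    r"(?P<city>[A-Z][a-zA-Z]+)"
)

def find_addresses(text):
    """Return (street, number, postcode, city) tuples found in free text."""
    return [m.group("street", "number", "postcode", "city")
            for m in ADDRESS_RE.finditer(text)]
```

In the plug-in, each match was turned into a clickable region so that clicking a recognized address triggered the distance calculation directly.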


1. Plan

The first step of this internship was creating a detailed plan stating what would be delivered at the end of the 20 weeks. In this document we described the task division: who was going to do what, and when. In addition, we described the minimum quality of the documents and decided which code guidelines we were going to adopt.

2. Setting up environment

In this internship I worked with Eclipse for the first time; back then I did not know this IDE yet. Since Eclipse was used to build plug-ins for Lotus Notes, we had to set up a run configuration for Lotus Notes so that newly developed plug-ins could be tested directly inside the Lotus Notes application.

3. Testing Sametime

In Lotus Notes the plug-in Sametime (a chat client inside Lotus Notes) was installed. A new version had just been released and needed to be tested. While testing Sametime we found a few bugs in combination with the headsets that Acuity was using. The headset manufacturer was very pleased with our findings, since the bugs were unknown to them.

4. Documentation

To get better at developing plug-ins, I built several more plug-ins for Lotus Notes. Finally, the plug-ins were documented for later maintenance.

Tools and Techniques

java eclipse lotus notes sametime