Arch Comp Project



Developing Interfaces – Portrait Designs

These designs assume that a project and user have already been defined. The themes are ideally oriented in portrait. Images can be enlarged when clicked.

Design Theme 1

The conceptual design theme presented here uses a horizontal navigation bar that gives the defined user access to the various information types.

Idle Start/View Interface.
Alternative View 1:
Once a label from the navigational bar is clicked.
Alternative View 2:
Once a label from the navigational bar is clicked. There are three different information designs to choose from. The top and middle options allow images to be viewed full screen when clicked, whereas the third shows the image in the large box above the smaller image thumbnails.
Alternative View 3:
Once a label from the navigational bar is clicked.

This design overlays information above the AR model whilst still allowing the model to be viewed. The information can be exited by clicking the x button in the top right-hand corner.

Alternative View 4:
Once a label from the navigational bar is clicked. In comparison to the other designs, the bottom navigational bar is used to change the viewing options of the AR model, e.g. Solid, Wireframe or Structural.
Once the model is clicked, information becomes viewable in a pop-up window, which can also be exited using the x button.
Alternative View 5:
Once a label from the navigational bar is clicked.

This design readjusts the viewing screen so that it is smaller and appears above a bottom panel of information. This information can also be closed using the x button in the top right-hand corner.

Design Theme 2

Idle AR View
Alternative View 1:
Once a label from the right-hand navigational bar is clicked, this interface appears. The first design demonstrates a translucent overlay box, whilst the second shows an opaque box.
Alternative View 2:
Once a label from the navigational bar is clicked, these options may be seen. The labels slide out to the side like pull-out folders; once clicked again, the information is retracted and the label returns to its original position. The main difference between the designs is the frame of the information.
For the first and third designs, once another label is clicked, the open folder retracts and the clicked label slides out. For the second design, the information refreshes when a new label is clicked and only retracts when the left-side label is selected.
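As a toy illustration of the first and third designs' behaviour, here is a small Processing sketch of my own (the labels and layout are made up; this is not the project's interface code): clicking a label slides out its panel, clicking it again retracts it, and clicking a different label swaps the open panel.

// Toy sketch of the pull-out folder behaviour (illustrative only).
String[] labels = {"Plans", "Sections", "Details"};   // hypothetical labels
int open = -1;   // index of the open folder, -1 = all retracted

void setup() {
  size(400, 240);
}

void draw() {
  background(230);
  for (int i = 0; i < labels.length; i++) {
    fill(i == open ? 180 : 120);         // highlight the open label
    rect(0, 20 + i * 40, 80, 30);        // label tabs on the left edge
    fill(255);
    text(labels[i], 8, 40 + i * 40);
  }
  if (open >= 0) {
    fill(250);
    rect(90, 20, 290, 180);              // the slid-out information panel
    fill(0);
    text(labels[open] + " information", 100, 40);
  }
}

void mousePressed() {
  for (int i = 0; i < labels.length; i++) {
    if (mouseX < 80 && mouseY > 20 + i * 40 && mouseY < 50 + i * 40) {
      open = (open == i) ? -1 : i;       // toggle: retract if already open
    }
  }
}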
Alternative View 3:
Once a label from the navigational bar is clicked.
The information then appears in the bottom panel, with the AR view rescaled in the panel above.
Information is closed by clicking the label again.
Alternative View 4:
Once a label from the navigational bar is clicked, the views of the model are changed, e.g. wireframe, solid, etc.

Once the model is clicked, a new interface appears with options directing the user to different information. The bottom two designs compare the translucent and opaque choices.

Developing Interfaces – Choosing Projects & Users

There are a number of options when looking at these interfaces. I have chosen to look at first choosing a project and then choosing the user to reach the AR information, as well as considering layouts where both the project and user are defined at the same time. Below are the interface options I have created.

Projects

Alternative View 1:

Drop-down menu

Alternative View 2:

Scroll down directory (Address Book) – Full Screen

Alternative View 3:

Scroll down directory (Address Book) – Smaller Version

Users

Alternative View 1:
This interface implements a drop-down menu after a project has been defined.
Alternative View 2:
Users define who they are using two successive interfaces.

Projects & Users

Alternative View 1:
Both project and user are defined in one interface via a drop-down menu.
Alternative View 2:
Both project and user are defined in one interface using a typical iPad options interface.
Alternative View 3:
Both project and user (a specific user) are defined in one interface using a typical iPad options interface.

Developing Interfaces – Login

In developing a number of alternative Junaio interfaces, I have opted for a simple and easy-to-use design. As a uniform design, I have stuck to a black horizontal border with the Junaio logo on the left and a back button on the right.

Alternative View 1:
The graphical layout of this login page is based on the Junaio channel creator and is similar to the current login interface.
Alternative View 2:
This layout opts for a typical iPhone/iPad interface.

Designing Interfaces

(Slideshow of interface designs.)

Physical Test

Today I purchased a 200 × 30 × 30 piece of wood to start constructing my physical model (at $6 it makes around 30 blocks). I already had some white, red and blue paint lying around from a British flag I had painted, and I had to buy a tube of yellow for $2.50.

I took out the table saw and got to work.

After a quick sand I was left with fairly similar blocks to start my puzzle.

Today I coated them all white as a base coat, and I will decide whether to emulate the Kohs test with all 4 colours or just do a simpler test with red and white cubes.

I need the test to mimic what I create on the computer, so I haven't decided yet.

Processing Textured Cube (with source code)

Code based on Dave Bollinger's TexturedCube example from the learning page of the Processing website.

PImage pix;
PImage pix2;
PImage pix3;
PImage pix4;
PImage pix5;
PImage pix6;
float rotx = PI/4;
float roty = PI/4;

void setup() {
  size(640, 360, P3D);
  // load one texture per cube face
  pix = loadImage("pic.jpg");
  pix2 = loadImage("pic2.jpg");
  pix3 = loadImage("pic3.jpg");
  pix4 = loadImage("pic4.jpg");
  pix5 = loadImage("pic5.jpg");
  pix6 = loadImage("pic6.jpg");
  textureMode(NORMALIZED);
  fill(255);
  stroke(color(44, 48, 32));
}

void draw() {
  background(0);
  noStroke();
  // centre the cube and apply the accumulated rotation
  translate(width/2.0, height/2.0, -100);
  rotateX(rotx);
  rotateY(roty);
  scale(90);
  TexturedCube(pix, pix2, pix3, pix4, pix5, pix6);
}

// draws one textured quad per face of the unit cube
void TexturedCube(PImage pix, PImage pix2, PImage pix3, PImage pix4, PImage pix5, PImage pix6) {
  beginShape(QUADS);
  texture(pix);
  // +Z "front" face
  vertex(-1, -1,  1, 0, 0);
  vertex( 1, -1,  1, 1, 0);
  vertex( 1,  1,  1, 1, 1);
  vertex(-1,  1,  1, 0, 1);
  endShape();

  beginShape(QUADS);
  texture(pix2);
  // -Z "back" face
  vertex( 1, -1, -1, 0, 0);
  vertex(-1, -1, -1, 1, 0);
  vertex(-1,  1, -1, 1, 1);
  vertex( 1,  1, -1, 0, 1);
  endShape();

  beginShape(QUADS);
  texture(pix3);
  // +Y "bottom" face
  vertex(-1,  1,  1, 0, 0);
  vertex( 1,  1,  1, 1, 0);
  vertex( 1,  1, -1, 1, 1);
  vertex(-1,  1, -1, 0, 1);
  endShape();

  beginShape(QUADS);
  texture(pix4);
  // -Y "top" face
  vertex(-1, -1, -1, 0, 0);
  vertex( 1, -1, -1, 1, 0);
  vertex( 1, -1,  1, 1, 1);
  vertex(-1, -1,  1, 0, 1);
  endShape();

  beginShape(QUADS);
  texture(pix5);
  // +X "right" face
  vertex( 1, -1,  1, 0, 0);
  vertex( 1, -1, -1, 1, 0);
  vertex( 1,  1, -1, 1, 1);
  vertex( 1,  1,  1, 0, 1);
  endShape();

  beginShape(QUADS);
  texture(pix6);
  // -X "left" face
  vertex(-1, -1, -1, 0, 0);
  vertex(-1, -1,  1, 1, 0);
  vertex(-1,  1,  1, 1, 1);
  vertex(-1,  1, -1, 0, 1);
  endShape();
}

void mouseDragged() {
  // drag to rotate: vertical drag spins about X, horizontal about Y
  float rate = 0.01;
  rotx += (pmouseY - mouseY) * rate;
  roty += (mouseX - pmouseX) * rate;
}
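One note for anyone running this: loadImage() looks in the sketch's data folder, so pic.jpg through pic6.jpg need to be placed there.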

Kohs Block – Research / User Test

I was going through the original Kohs Block Design tests paper (1920) and decided to create colour versions of all the black and white scenarios.

This helped me to understand the puzzles better and to recreate them in a digital sense.
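To give an idea of what that could look like digitally, here is a minimal Processing sketch of my own; the grid-of-face-codes encoding is an assumption for illustration, not the project's actual format. Each cell of a card stores a code for a solid or diagonally split face:

// Hypothetical encoding of a red/white Kohs pattern card:
// 0 = white face, 1 = red face, 2-5 = red/white diagonal in its
// four possible rotations.
int[][] card = {
  {1, 2, 0},
  {3, 1, 4},
  {0, 5, 1}
};
int cell = 60;   // pixel size of one block face

void setup() {
  size(180, 180);
  noLoop();   // the card is static, draw it once
}

void draw() {
  for (int r = 0; r < 3; r++) {
    for (int c = 0; c < 3; c++) {
      drawFace(c * cell, r * cell, card[r][c]);
    }
  }
}

// Draw one face at (x, y): a white square, a red square, or a red
// triangle over white for each of the four diagonal orientations.
void drawFace(int x, int y, int code) {
  stroke(0);
  fill(255);
  rect(x, y, cell, cell);
  fill(200, 30, 30);
  if (code == 1) rect(x, y, cell, cell);
  else if (code == 2) triangle(x, y, x + cell, y, x, y + cell);
  else if (code == 3) triangle(x, y, x + cell, y, x + cell, y + cell);
  else if (code == 4) triangle(x + cell, y, x + cell, y + cell, x, y + cell);
  else if (code == 5) triangle(x, y, x, y + cell, x + cell, y + cell);
}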


I also started trying to code this environment in Processing. I managed to get a cube with all 6 sides textured from 6 separate *.jpgs and controlled via the mouse. However, I need to change it so that each surface is split into 4 click regions and the cube rotates according to which region you click. I also need it to save the change once you let go of the mouse, and I want to add a timer function; a rough sketch of these last two ideas is below.
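Here is a rough standalone sketch of those last two ideas, using a plain box() as a stand-in for the textured cube; snapping to the nearest quarter turn on release is my assumption of how "saving" the change could work:

// Rough sketch (illustrative assumptions, not the final implementation):
// drag rotates the cube, releasing the mouse snaps the rotation to the
// nearest 90-degree turn, and a timer runs via millis().
float rotx = PI/4;
float roty = PI/4;
int startTime;

void setup() {
  size(640, 360, P3D);
  startTime = millis();                  // puzzle timer starts here
}

void draw() {
  background(0);
  fill(255);
  // show elapsed seconds in the corner
  text(nf((millis() - startTime) / 1000.0, 0, 1) + " s", 10, 20);
  translate(width/2.0, height/2.0, -100);
  rotateX(rotx);
  rotateY(roty);
  fill(200, 40, 40);
  box(90);                               // stand-in for the textured cube
}

void mouseDragged() {
  float rate = 0.01;
  rotx += (pmouseY - mouseY) * rate;     // vertical drag spins about X
  roty += (mouseX - pmouseX) * rate;     // horizontal drag spins about Y
}

void mouseReleased() {
  // "save" the orientation by snapping to the nearest quarter turn
  rotx = round(rotx / HALF_PI) * HALF_PI;
  roty = round(roty / HALF_PI) * HALF_PI;
}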

UPDATE: I decided to stick to just red and white bricks, so I edited the diagrams. The ticked designs were the original red and white tests, while the others are edited versions of the blue and yellow tests.

Kohs Block

http://www.stoeltingco.com/stoelting/productlist13c.aspx?catid=2077&home=Psychological

http://upload.wikimedia.org/wikipedia/commons/6/66/Kohs-Block-Design_tests-1920.pdf

http://brain.oxfordjournals.org/content/129/7/1789.full

The Kohs block design test requires participants to arrange 9 red and white blocks to match a small image, and you are timed while performing the task. Looking online, I was unable to source the puzzle, as I am not a registered doctor and do not have the necessary funds they were asking for.

I also learnt that the test is used to assess mental ages of 3 to 19 years and takes approximately 40 minutes for individual administration.

The Kohs block test is a performance test designed to measure intelligence. The test taker must, using 16 coloured cubes, replicate the patterns displayed on a series of test cards. Because the instructions are easily communicated, the test can be administered to people with language or hearing impairments. The test was developed by psychologist Samuel C. Kohs around 1923, building on earlier, similar designs.

Microsoft Research NUI

Today I spent time researching and stumbled across the following resources from Microsoft regarding natural user interfaces (NUI).

The image presents a lot of research undertaken by Microsoft into people's opinions of NUI. However, it's a shame that the chart doesn't include any information from Australia.
The image was found on The Official Microsoft Blog: http://blogs.technet.com/b/microsoft_blog/archive/2011/01/26/microsoft-is-imagining-a-nui-future-natural-user-interface.aspx

“A recent poll we conducted of about 6,000 people across six countries showed how nascent NUI is: Only about half of the respondents were familiar with the various emerging dimensions of NUI, such as 3D simulation technology. Yet nearly 90 percent of all audiences view natural and intuitive technologies as more than a fad. They believe these technologies are the way of the future.”

Kinect NUI

http://www.kconnolly.net/ (I recommend you don't visit this link, as it appears to me that Kevin has some absurd views of the world and his blog is a display of that)

However, he has several YouTube videos of his progress and implementation of a Kinect NUI, which I have since installed and tested.

Credit where credit is due, though: what he has implemented is amazing and a step in the right direction. You can watch his updates showing his progress and the features he has implemented.

Here is an edited version of the readme:

Gestures start disabled. The first gesture you do must be the “Enable Gestures” gesture.

Both hands slowly move up to Enable Gestures. Move up about 18 inches in a one-second timeframe.

Both hands slowly move down to Disable Gestures. Move down about 18 inches in a one-second timeframe.

Both hands start from the belt area and move outward and up (like in Disco) to Zoom In via Windows Magnifier.

Both hands start at the top outer corners and move quickly toward the belt area to Zoom Out via Windows Magnifier.

While in Magnified (zoomed in) mode:

– Moving your pelvis from side to side (i.e. walking) will scroll horizontally.

– Moving your left hand up and down will scroll vertically.

With your hands horizontally aligned (same X axis), move swiftly inward to a center point directly in front of you to enter Flip3D mode.

While in Flip3D mode:

– The selected window will track your right hand, but in relation to the left hand’s position. Both have to move to scroll.

– Move left hand back and right hand forward to “push” the window stack back.

– Move right hand back and left hand forward to “pull” the window stack toward you.

– Place both hands together directly in front of you and push away along the X axis (sideways) to exit Flip3D mode.

With your left hand about 6″ left of your left shoulder, move your left hand forward about 18 inches to open the Pie Menu.

Note the Pie Menu really doesn’t do anything yet. Forthcoming feature.

While not in Magnified, Flip3D or other special modes, push your right hand forward about 18 inches to “grab” the currently active window.

– Move your right hand around to move the “grabbed” window. Remember this is in XY space in relation to the Kinect sensor, not the monitor(s).

– Move your right hand back (away from sensor) about 18 inches to “release” the “grabbed” window. It should stay where you put it.
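To make the gesture rules concrete, here is a rough sketch of my own of how the "Enable Gestures" check might work; the function, units and threshold are illustrative assumptions, not Kevin's actual code. It compares two skeleton samples taken about a second apart:

// Illustrative check for the "Enable Gestures" rule: both hands must
// rise about 18 inches (~0.46 m) within roughly one second. The
// sampling and units are assumptions, not the actual implementation.
float RISE = 0.46;   // metres both hands must move up

// y0 values are hand heights at the start of the window, y1 values
// one second later; y increases upward here.
boolean enableGesture(float leftY0, float rightY0, float leftY1, float rightY1) {
  return (leftY1 - leftY0) >= RISE && (rightY1 - rightY0) >= RISE;
}

void setup() {
  // simulated samples: both hands rise 0.5 m -> gesture fires
  println(enableGesture(0.9, 0.9, 1.4, 1.4));   // prints "true"
  // left hand only rises 0.2 m -> no gesture
  println(enableGesture(0.9, 0.9, 1.1, 1.4));   // prints "false"
}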

 
As you can see, some of the gestures just don't make sense, such as:

Moving your pelvis from side to side (I would rather have voice recognition implemented alongside this, re-using simple gestures such as moving your hand left and right).