Sounds we need to collect
1. Ambient Sound: looping background sound
2. Clear Button: bulldozers etc.
3. Day Button: birds chirping
4. Night Button: crickets, an occasional single owl
5. Tree Movie Clip Button: leaves rustling
6. Wool/Clouds Button: gust of wind (not too windy though)
7. Birds Button: wings flapping, chirps
8. Birds' Nest: flapping, eggs moving and hatching, baby birds chirping
9. Sandpaper: scratchy noises?
10 + 11. Volume Up and Down buttons
12. Fluffy Button: soft popping noises
13. Rainstorm Button: soft rain, soft thunder rumbling
14. Chaos: earthquake
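While we're collecting these, a rough sketch of how I'm assuming they'll get hooked up in AS2. The linkage IDs like "ambientLoop" and "leavesRustle" are made up; we'll name them properly once the clips are in the library:
// guesswork until the actual clips exist; assumes each sound is
// exported from the library with a linkage ID
var ambient:Sound = new Sound();
ambient.attachSound("ambientLoop");
ambient.start(0, 999999); // AS2 has no loop-forever flag, so just a big loop count
// one-shot sounds for button presses
function playSound(id:String):Void {
    var s:Sound = new Sound();
    s.attachSound(id);
    s.start();
}
// e.g. playSound("leavesRustle"); for the tree button
// volume up/down (10 + 11): a Sound made with no target controls the global level
var masterVol:Number = 100;
function volUp():Void {
    masterVol = Math.min(100, masterVol + 10);
    ambient.setVolume(masterVol);
}
function volDown():Void {
    masterVol = Math.max(0, masterVol - 10);
    ambient.setVolume(masterVol);
}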
Tuesday, May 20, 2008
Thursday, May 8, 2008
PROTOTYPE APPRAISAL REPORT
Better Than Everyone Else: ‘mirror’
This team has demonstrated a solid design process and good background research throughout its development. It seems that the trip to the art gallery brought forth many new ideas that they are taking into consideration. Ideas about how people interact with displays, both interactive and static, have significantly added to the project's overall appeal. For instance, having an image blur when someone approaches, detected through motion sensing, engages that person with 'mirror' and grabs their attention and interest.
I would encourage them to keep applying what they have learned, as I think there is still room for improvement. For example, I feel that involving people more would improve the interaction between the technology and what is being displayed. So far, as I understand it, once the mirror is approached and the image is un-blurred according to the viewer's distance, there seems to be little other 'control' (or interaction, for that matter) over what is being shown. Would it be possible to let a person browse between the images shown, for example? On another note, I think the intended use of audio is great, as it really informs what is being displayed. Overall I'm pretty excited to see the final result. Good work.
Static Synergy: ‘multi-touch table’
Unfortunately, due to technical difficulties, the completed prototype couldn't be demonstrated in the presentation. The demo video shown during the prototype presentation, however, was great. Even though it was very short, it managed to convey the feeling and overall idea of the project very well. It would have been good to incorporate more detail on how you would actually cut/copy/paste/drag/delete. As mentioned in class before, I think the use of different scenarios would have helped describe the different interactions between the system and its users, since the 'multi-touch table' is a system for virtual file browsing and sharing. For example, starting from how two people would go about getting their files off their devices and onto the table: what steps would be necessary, and would both users have to perform a certain sequence? I am also interested in how a person would use his/her hand. Would it mean something different if they used two fingers when dragging instead of one (say, the difference between drag and copy-to)? This could eliminate the need for text or drop-down menus, which I think would be a good direction to take (with this it might be helpful to have some sort of icon/graphics to help users understand what options they have). On a final note, the use of audio would help give users feedback on the actions and tasks they are currently trying to perform. I'm really interested in seeing how the final table turns out. Good work.
My reviews for Shadow Monsters and BrightT
The Shadow Monsters
The Shadow Monsters' learning table has been designed for children aged 4-5, and the planning and research they have done into this age group really shows in their design for the table. When they initially presented their project in the proposal, they were debating the best place to put all the technology (RFID readers, projectors, etc.) for the table. Their initial idea was to place all the technology underneath the table, but after observing young children in classrooms they clearly understood that safety would be a main concern, which led them to mount all harmful objects and equipment out of reach on the ceiling. In terms of design, the group has obviously thought carefully about how children normally use designated work spaces; this shows in their further ideas on placing the table in a corner of the learning environment.
My only suggestion at this point is careful consideration of which objects they attach RFID tags to for children to identify. Children, especially at such a young age, are prone to breaking things and hurting themselves in the process, so I suggest avoiding fragile, breakable objects and anything with sharp edges. The objects should also be familiar from everyday life so that the children have some idea of what each object is; however, since children should also learn about objects they don't come into daily contact with, it would be good if one or two of the objects were ones they had no prior knowledge of. I'm looking forward to seeing your table in action and little children learning in amazement.
BrightT
The BrightT concept has come very far since we last saw it in the project proposal. When we first saw it, all I knew was that you were planning to design t-shirts with LEDs embedded in them. I was a little unsure how LEDs would work in clothing, in terms of suitable situations where it would be useful or acceptable to wear them, as well as the idea of washing clothing with electronic devices sewn into it. However, focusing on how the shirts will be used to connect people makes those practicality issues easier to look past.
The designs shown in the presentation for the boys' and girls' t-shirts were interesting, with the LEDs embedded in certain locations on the shirt, though I was confused about whether these shirts would sense proximity or whether they were purely decorative. The only suggestion I can think of at present is to make sure you figure out a way to hold the phidget and battery pack for the LEDs that is carefully concealed and out of the way. From the ideas you mentioned in the presentation, I don't think creating a pocket on the back or side of the shirt would work, as it would make one side of the shirt heavier than the rest and stretch it. Maybe a pouch for the phidget and battery pack, with a clip to attach it to the back of the wearer's jeans or hook into a belt loop, would work better, although I can't say for sure as I haven't felt how much it all weighs. Apart from that, I'm looking forward to seeing how people wearing the shirts interact with each other.
Report Reviews thingys
Hey gang,
are we gonna post up our report reviews here on the blog, or should we just send them to you, Vic? Let me know, thanks guys. I'm trying to figure out what to write for Tara's group now; done Lily's one already.
Corrine
Sunday, May 4, 2008
Update on key detection script
Hey Bec
just a quick note letting you know that we actually had the button detection script in AS3 :P and it won't work in the main .fla file, oh funs!! Good news is I've got it to AS2 now and it seems to be quite happy. Here is the script:
// listener object: key presses stand in for the wall's physical buttons
keylistener = new Object();
keylistener.onKeyDown = function() {
if (Key.getCode() == 49) {
trace("1 chaosButton()");
}
if (Key.getCode() == 52) {
trace("4 rainButton()");
}
if (Key.getCode() == 54) {
trace("6 dayButton()");
}
if (Key.getCode() == 56) {
trace("8 nightButton()");
}
if (Key.getCode() == 68) {
trace("d fluffyButton()");
}
if (Key.getCode() == 73) {
trace("i woolButton())");
}
if (Key.getCode() == 76) {
trace("l cushyButton()");
}
if (Key.getCode() == 77) {
trace("m clearButton()");
}
if (Key.getCode() == 80) {
trace("p birdButton()");
}
if (Key.getCode() == 86) {
trace("v sandpaperButton()");
}
if (Key.getCode() == 87) {
trace("w volUp()");
}
if (Key.getCode() == 88) {
trace("x volDown()");
}
}
Key.addListener(keylistener);
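In case it helps with tidying: an untested sketch that swaps the stack of ifs for a lookup object (same key codes and button names as above):
// map key codes straight to the button names
var keyMap:Object = {49: "chaosButton", 52: "rainButton", 54: "dayButton",
        56: "nightButton", 68: "fluffyButton", 73: "woolButton",
        76: "cushyButton", 77: "clearButton", 80: "birdButton",
        86: "sandpaperButton", 87: "volUp", 88: "volDown"};
keylistener.onKeyDown = function() {
    var name:String = keyMap[Key.getCode()];
    if (name != undefined) {
        trace(name + "()");
    }
};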
Gave it to Michael to have a look at, so hopefully he will make improvements/suggestions.
okay..had enough of this BS, nites!
Saturday, May 3, 2008
Motion detection script... so far
hey Ladiez,
I'll probably post this up on the newsgroups as well, but thought I'd let you know how (terribly :( ) the scripting is going lol... seriously... me stupid
oh...how do you attach actual files to this thing!?
------------------------------------------------------------------
Would it be possible to get some help on a piece of script? Is it even possible to do it this way? Suggestions?
What I want it to do:
Basically, if the camera detects enough movement, I want it to run a function once (person stands in front of our wall > cam detects movement > activates wall for interaction).
I've given up on the activityLevel tutorials that I've found on the web (they are a little too complicated for me… sad as that seems). So far I have Flash detect whether there is motion on a cam (a green/red light goes on and off depending on whether you're moving). If the light changes I can have it execute a function (i.e. go to a different frame to start animations etc.).
The problems:
When I run it, the camera automatically detects change and starts off green (thus executing the function too early).
- was thinking maybe I could set it so that only on the third light-flash it would run the function activateWall()?
Any ideas what I could do that won't be too complicated?
------------------------------------------------------------------
In Flash
(instances on stage: "vid" = video object, "light" = movie clip with two frames: 1. green + stop() / 2. red)
script in the first keyframe:
// Declare Video instance on stage
var vid:Video;
// Declare MovieClip instances on the stage
var light:MovieClip;
// Create a reference to the camera
var userCam:Camera = Camera.get();
//create function to 'activate wall'
function activateWall() {
trace("stop repeating dammit!!!");
}
// Attach the camera output to Video instance
vid.attachVideo(userCam);
// set motion sensitivity (0-100) and the timeout in milliseconds
userCam.setMotionLevel(80, 500);
// Delegate.create keeps 'this' pointing at the timeline inside the handler
userCam.onActivity = mx.utils.Delegate.create(this, onMotion);
function onMotion(isActive:Boolean) {
light.gotoAndStop( isActive ? 2: 1 );
activateWall();
//gotoAndStop(3)
}
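One idea for the too-early problem (an untested sketch, same setup as above): replace onMotion with a version that keeps a counter and a done-flag, so the first flash from the camera settling gets ignored and activateWall() fires exactly once:
var activityCount:Number = 0;
var wallActivated:Boolean = false;
function onMotion(isActive:Boolean) {
    light.gotoAndStop(isActive ? 2 : 1);
    if (isActive) {
        activityCount++;
        // skip flash #1 (camera settling), then activate once only
        if (activityCount > 1 && !wallActivated) {
            wallActivated = true;
            activateWall();
        }
    }
}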
------------------------------------------------------------------
ActionScriptMemo_01
This is to share the memos I keep to stay on track while I'm working on this. I'm hoping it will be helpful to Mishie too, just because I don't have a way of showing what I'm trying to do otherwise.
Time 1:10 – 3:24
Before anyone freaks out about the code (which I will send to email accounts), there isn't any functional code yet. What I've done is further refine the chart from before, categorising it into what Flash needs. Some items are missing because they're not directly connected with the ones listed, and a few only need the filter class as part of their function.
Time 4:39 – 7:26
So when I went to write out the function list, it turned into listing the ones needed as well as some of the code that I think may apply to them. The next step is to find relevant code and put it into the functions, where it should theoretically work. Dekker seemed to understand what I had started to write down and wrote code in place of it, so I think if I can properly demonstrate how each button will react, he will be able to robot it out. What he wrote the other day is making sense and may apply to a few functions, so bonus. Otherwise, it's a lot of referring back to the pong example.
The code mainly deals with pulling images at the moment, but sounds have not been forgotten. In my head I’m doing it Animation Production StylE, where all the foley is done in the final stages.
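To make that concrete, roughly the shape I'm imagining for the function list (names match the key detection script; "wallClip" and the frame labels are placeholders until the real assets exist):
var wallClip:MovieClip; // the main wall movie clip on stage
function dayButton():Void {
    wallClip.gotoAndStop("day"); // assumes labelled frames on wallClip
    // sound cue gets added here in the foley pass
}
function nightButton():Void {
    wallClip.gotoAndStop("night");
}
function rainButton():Void {
    wallClip.gotoAndStop("rain");
}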
Planning to come back after dinner, but previous experience suggests that I can only take ~4hours of this before my brain goes kaput.
Time 10:09 –
AHAHAHAHAHAHAHAtoomuchcaffiene.
Time to find the filter functions again.
Time 12:03
Time to stop for 2night. Keep going around in circles with my scriptSpeak, indicating that's all I'll get out of me 2day. Am coming to understand the GradientBevelFilter and how we can use it to effect. Tomorrow will probs be for making a simple sprout animation (KIS, dude) that we (through use of video editing) can activate, and then the filters should be able to be stepped through using the wall's night and day buttons to go to and from night and day at the very least. That's how I think it's going down, I'll find out 2moro if that's right or not ;)
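For my own reference (and Mishie's), roughly how I think the filter stepping will look. The colour values are pure guesses for now, and "sproutClip" is just a stand-in instance name for the sprout animation:
import flash.filters.GradientBevelFilter;
// guessed day/night colour ramps, to be tuned against the real artwork
var dayColors:Array = [0xFFFFCC, 0xFFFFFF, 0xFFCC66];
var nightColors:Array = [0x000033, 0x333366, 0x000000];
var alphas:Array = [1, 0, 1];
var ratios:Array = [0, 128, 255];
function applyBevel(mc:MovieClip, colors:Array):Void {
    // distance, angle, colours, alphas, ratios, blurX, blurY, strength, quality, type, knockout
    var f:GradientBevelFilter = new GradientBevelFilter(
            5, 45, colors, alphas, ratios, 8, 8, 1, 2, "inner", false);
    mc.filters = [f];
}
// dayButton   -> applyBevel(sproutClip, dayColors);
// nightButton -> applyBevel(sproutClip, nightColors);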
Ciao Bambinas!