A sketch playing with the noise function of p5 and the sound library. The vertical positioning of each sphere, as well as the frequency of its oscillator, is shifted by stepping through the Y value of a noise sequence. The Z value of the noise sequence affects the diameter of the sphere and the amplitude of its oscillator.
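The noise-to-parameter mapping described above can be sketched in plain JavaScript like this. This is not the sketch's actual source – the function name, ranges, and the idea of pre-sampled noise values in [0, 1] are all illustrative – but it shows how one noise read can drive both a visual and an audio parameter:

```javascript
// A p5-style map() helper: rescale v from [a1, b1] into [a2, b2].
function map(v, a1, b1, a2, b2) {
  return a2 + (v - a1) * (b2 - a2) / (b1 - a1);
}

// Given noise samples in [0, 1] at the sphere's current Y and Z offsets,
// derive its vertical position, oscillator frequency, diameter, and amplitude.
// (Names and output ranges are illustrative, not from the actual sketch.)
function sphereState(noiseY, noiseZ, canvasH, freqLo, freqHi) {
  return {
    y: map(noiseY, 0, 1, 0, canvasH),        // Y noise -> vertical position...
    freq: map(noiseY, 0, 1, freqLo, freqHi), // ...and oscillator frequency
    diameter: map(noiseZ, 0, 1, 10, 80),     // Z noise -> sphere diameter...
    amp: map(noiseZ, 0, 1, 0, 1)             // ...and oscillator amplitude
  };
}
```

Because position and frequency share one noise value (and diameter and amplitude share another), the sight and sound of each sphere stay locked together as the sketch steps through the sequence.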
Add the initial “soundsphere” by pressing the right arrow. Use the left & right arrow keys to delete or add more spheres. (You can add up to 20.)
Use the up and down arrows to adjust how quickly the sketch steps through the noise sequence.

Use the ‘a’ and ‘s’ keys to move the selector left or right (selector is displayed at the bottom).
Use the spacebar to change the wave type of the selected sphere (wave type will be displayed at the top right).
You can change the frequency range of the oscillator wave for the selected sphere using the following keys:
1 = Decrease lower limit of range by 50 Hz
2 = Increase lower limit of range by 50 Hz
3 = Decrease lower limit of range by 200 Hz
4 = Increase lower limit of range by 200 Hz
7 = Decrease upper limit of range by 200 Hz
8 = Increase upper limit of range by 200 Hz
9 = Decrease upper limit of range by 50 Hz
0 = Increase upper limit of range by 50 Hz
The lowest frequency limit is 60 Hz and the highest is 15 kHz.
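The 60 Hz / 15 kHz bounds above suggest a simple clamp whenever one of the number keys fires. A minimal sketch of that logic (the names and structure are my own, not from the sketch's source):

```javascript
// Allowed window for either range limit, per the sketch's rules.
const FREQ_MIN = 60;     // Hz
const FREQ_MAX = 15000;  // Hz (15 kHz)

// Apply a key's delta (e.g. +50, -200) to a limit, staying inside the window.
function adjustLimit(current, delta) {
  return Math.min(FREQ_MAX, Math.max(FREQ_MIN, current + delta));
}
```

So pressing a "decrease by 50 Hz" key at a limit of 100 Hz lands on 60 Hz, not 50 Hz, and no amount of "increase" keys can push a limit past 15 kHz.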
The sketch will probably run more smoothly if you view it on its own page here. The source code is available here and is commented in detail. One issue I’m noticing for future investigation – some audio “clicking” distortion occurs in certain frequency ranges.
My latest experimental sketch with p5.js involves work with arrays. You will probably need to click on the image to activate it. Use the “t”, “s”, and “c” keys on your keyboard to add a triangle, square, or circle to the scene (size and color are randomized).
The sketch can be viewed on a separate page here (which will probably perform at a faster frame rate), and the p5 source code is available here. Each shape is added as an independent object to the master shape array. The number of elements currently in the master array is displayed at the top left. As the shapes fall and are shuttled off to the left or right (again, a random choice), they are “spliced” from the array after they leave the screen, which is why the count goes down.
This sketch also makes use of the p5.scribble library, which gives the shapes their jagged, “sketchy” appearance. If you uncomment the “randomSeed” statement in line 15 of the code, this will stop the animation of the jaggedness, since the randomization used for that effect (in the p5.scribble code) is then “seeded” continually with the same number. (This number could be anything…not just “98”.)
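The off-screen splicing described above can be sketched like this (a simplified, plain-JS illustration – the property names and bounds test are mine, not the sketch's):

```javascript
// Remove shapes that have fully left the canvas. Walking the array
// backwards means splice() never skips the element after a removal.
function removeOffscreen(shapes, canvasW, canvasH) {
  for (let i = shapes.length - 1; i >= 0; i--) {
    const s = shapes[i];
    const gone = s.x < -s.size ||          // shuttled off the left
                 s.x > canvasW + s.size || // shuttled off the right
                 s.y > canvasH + s.size;   // fell past the bottom
    if (gone) shapes.splice(i, 1);         // drop it from the master array
  }
  return shapes.length; // the count displayed at the top left
}
```

Iterating backwards (or rebuilding the array with filter()) matters here: splicing while counting forward would shift the remaining elements down and skip the next one.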
A slight diversion of focus for this post…this past Friday (Oct. 21st), there were large distributed denial-of-service attacks (targeted at servers maintained by the company Dyn) which affected many major sites, including Netflix and Twitter. It appears that thousands of the DDoS sources included “internet of things” devices like webcams…and some of those are now being recalled: Webcams used to attack Reddit and Twitter recalled – http://www.bbc.com/news/technology-37750798
The Chinese electronics manufacturer Hangzhou Xiongmai stated that many of their cameras could be easily hacked since users didn’t bother to change the default password on their devices. A bigger issue is that some devices don’t even allow users to change a default password. The BBC article states “Security costs money and electronics firms want to make their IoT device as cheap as possible. Paying developers to write secure code might mean a gadget is late to market and is more expensive. Plus enforcing good security on these devices can make them harder to use – again that might hit sales.”
…So, they get a chance of a slightly greater profit margin at the risk of a massive cyberattack that knocks out hugely popular websites used daily by millions of people? And they risk the enormous expense of needing to recall and upgrade their devices after such an attack occurs? Hmm…lesson learned??
…so it’s not exactly an “extreme” or “ultimate” game of pong, but the challenge led me to try some previously unexplored functions in p5.js. This version currently works only with touch screens and is sized specifically for an iPad Air: http://blog.rich-path.com/p5/pong/ (p5 source code is available here)
Particularly interesting elements of developing the game for me were:
Coding the scoring system. Points are gained by each player when the small light blue marble “ball” hits one of the 3 larger marbles in the center (based on the direction of the ball). The 2 outer targets are worth 10 points, the middle is worth 20. The numbers display the accumulated score for each player; however, if a player misses the ball with their paddle, they lose all of the points that they scored during that round (their score resets to that of the previous round). The target hit detection makes use of p5’s dist() function.
Using the oscillators and envelopes in the p5.sound library. This example was particularly helpful in getting started with sound generation, though I bypassed the MIDI note conversion and am providing the oscillator with a specific frequency value. Also, to prevent an audible note from immediately playing at the start of the game, the oscillator generates a super-low frequency of 1 Hz until a target is hit and a different tone is played…a bit of a hack until I come up with a better solution.
Use of the built-in touches array to detect fingers on a touch screen. I found the short example sketch listed in the first response from lmccart on this page to be quite useful in figuring out how to track (and limit the number of) touches on a screen. The paddles and ball won’t move unless at least one finger is on either side.
Changes in velocity depending on which part of the paddle hits the ball. Basically using a combination of the map() function and some hit-or-miss experimentation with calculations for this – the ball will move slower and more vertically straight when it hits the center of a paddle, and the angle and speed increases greatly towards the edges.
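The dist()-based target detection mentioned above boils down to a circle-overlap test. A minimal sketch (the object shapes and threshold are illustrative, not the game's actual code):

```javascript
// p5's dist(): straight-line distance between two points.
function dist(x1, y1, x2, y2) {
  return Math.hypot(x2 - x1, y2 - y1);
}

// The ball hits a target marble when the distance between their centers
// is less than the sum of their radii.
function targetHit(ball, target) {
  return dist(ball.x, ball.y, target.x, target.y) < ball.r + target.r;
}
```

From there, awarding 10 or 20 points is just a matter of which of the three target marbles reported the hit, and which direction the ball was travelling.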
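The "at least one finger on either side" rule from the touches discussion above can be modeled as a simple scan of the touches array. This is a plain-JS simplification – p5's touches array holds objects with x/y properties, which is all this relies on:

```javascript
// Report whether any touch lands on the left or right half of the canvas.
// (Function name is mine; p5 supplies the touches array in the browser.)
function sidesTouched(touches, canvasW) {
  return {
    left: touches.some(t => t.x < canvasW / 2),
    right: touches.some(t => t.x >= canvasW / 2)
  };
}
```

Each paddle (and the ball) only moves when its side reports true, which is also a cheap way to cap how many touches actually matter.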
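The edge-dependent bounce described above can be sketched with map() like this. The constants and the exact speed formula are invented for illustration – the real version came out of that hit-or-miss experimentation – but the shape of the idea is the same:

```javascript
// A p5-style map() helper: rescale v from [a1, b1] into [a2, b2].
function map(v, a1, b1, a2, b2) {
  return a2 + (v - a1) * (b2 - a2) / (b1 - a1);
}

// Where on the paddle did the ball land? offset runs from -1 (left edge)
// through 0 (dead center) to 1 (right edge).
function bounceVelocity(ballX, paddleX, paddleW, maxVX, baseVY) {
  const offset = (ballX - paddleX) / (paddleW / 2);
  return {
    vx: map(offset, -1, 1, -maxVX, maxVX), // sideways speed grows toward the edges
    vy: baseVY * (1 + Math.abs(offset))    // faster bounce at the edges too
  };
}
```

A center hit returns vx = 0 (straight up at base speed); an edge hit returns the maximum horizontal speed and double the vertical speed.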
Along with all of the great examples available on the (newly re-vamped?) OpenProcessing site, I stumbled upon this inspiring collection on Tumblr – “Experiments in Processing” http://p5art.tumblr.com/. I’m not sure who’s responsible for the site, and there are many sketches that are quite similar to those posted on OpenProcessing. But for me, this is a particularly attractive group of intriguing experiments to explore and learn techniques from. In particular, the fireflies example turned me on to Dan Shiffman’s “Metaballs” coding challenge tutorial on YouTube – high on my “must watch soon” list.
In this small sketch, four separate landscape ranges are created in a for loop (counting down the variable i from 5, so that the largest range gets drawn first and appears furthest back in the layers). The sin() function is fed with the a variable (responsible for the wave shapes), which increases slightly (+.06) each frame, and with the changing i variable from the for loop, which is responsible for the height (and thus the perspective depth) of each range. The a variable is also used to modulate the background color that slowly shifts from a blue sky to black night.
I tried to find more video documentation of the VIA performance event in Pittsburgh from this past weekend (thanks to Chris Coleman, who performed in the lineup, for bringing this to my attention). The only clips I’ve uncovered thus far are posted on the event’s Facebook page: https://www.facebook.com/VIA.HQ/videos?ref=page_internal. Most of them are, unfortunately, abysmal in their audio and visual quality, but at least they provide a sampling of what some of the live performances were like. From that small collection, I’m mostly attracted to the clips from Rabit, possibly since the visuals are more abstract and appeared to sync in their changes more directly with the music, as opposed to simply being a decorative backdrop for the music. Makes me think further about the fine balance between those elements – I’ve seen “multimedia” events where the projected visuals feel like they really augment the music (I think mostly of larger concert performances from bands like Porcupine Tree or Pink Floyd/Roger Waters), and other performances where it’s just not working…one element feels like it’s distracting from the other. I’m hoping to find more reflections written by either artists or attendees of VIA to get a better sense of people’s reactions.
A small experiment using p5.js – a loop of ellipses drawn in random sizes and colors that rise to the top of the canvas at random speeds while drifting slightly left and right.
The larger non-iframe version can be viewed here. The p5 code can be viewed here.
An issue to fix – once each ellipse moves beyond the top of the canvas (based on 0 minus current ellipse height), it should be moved back below the bottom (the canvas height value plus the current height of the ellipse), and then rise again from there. However, some of the shapes don’t get reset below the bottom of the canvas…they just appear in the visible lower portion and then begin their ascent. This causes a “popcorn” effect visually…more exploration is needed to figure out why this is happening…
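For reference while I dig into it, here's a minimal version of the reset logic that does behave (plain JS, with my own property names; y is the ellipse center and r its radius). One guess at the “popcorn” cause, reflected in the comments: if the radius is re-randomized after repositioning, a newly larger shape can end up partly inside the visible canvas – so this sketch picks the new size first:

```javascript
// Move an ellipse up one frame; once it's fully above the top edge,
// park it fully below the bottom edge so it can rise again.
function updateEllipse(e, canvasH) {
  e.y -= e.speed;            // rise
  if (e.y < -e.r) {          // fully above the top (0 minus current radius)?
    e.r = e.nextR;           // choose the new size FIRST... (hypothetical fix)
    e.y = canvasH + e.r;     // ...then place it below the bottom using that size
  }
  return e;
}
```

Whether that's actually the bug in my sketch remains to be seen, but ordering the size change before the reposition is the cheap thing to rule out first.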
The final version (…at least for now…) of my video “challenging current technical pathways”: SAFE
The password for viewing is the same password that you used to access this blog post.
During the in-class critique last Friday (10/7), it became clear to me that the length of both sections – the collection of narrated accounts in the first half and the fictional ad in the second – had to be shortened. Given the deadline for the project and the footage that I had available to work with, it made more sense to keep the focus of this hypothetical technology related to children and their “protection”, instead of immediately including applications for adults as well. The news story screenshot montage at the beginning is still too “visually basic” for my tastes…perhaps that will be reworked in further exploration of video effects and filters (ah, the hat of compromise that sometimes must be donned when one hits a deadline…)
Most of the footage used in the ad section was shot during the Supernova outdoor animation festival, Sept. 24 in Denver, CO. The music track used for the ad is by Lusine ICL – “Headwind” from the album Language Barrier.
In considering how to present the second scene in my short video project – the ad promoting the promise of the self-recording technology – I know that I want to avoid any overall voice narration. The Google “Year in Search” videos rely heavily on text interspersed between video segments…I like this approach, and it made me think of another ad campaign from the late 80s that intrigued me as a teenager:
Oooooo, the luring questions that appear as simple text, with the promise of answers and revelations provided in the book. Fortunately, after reading it (and even being interested in its contents), I did not become a Scientologist. (And in fact, it made me curious enough to read an anti-Scientology book that exposed the hypocrisy and abuse that existed in the “church”.) At this point, however, it serves as inspiration for a visual approach…