Noises & Signals

Contemplations on creativity in our digital age

Author: richpath

Max & FFT filter experimentation

For the audio transformation aspect of Signals and Stillness, I wanted to achieve an effect similar to one that I’ve worked with in Adobe Audition, where an FFT filter can be used to isolate particular frequencies, resulting in an ethereal musical-chord-like output created from any audio source (e.g., the “C Major Triad” preset of Audition’s FFT filter). I found a great starting point for this effect in the “Forbidden Planet” sketch of the main Max examples collection.

Specific values can be set in the multislider object to isolate frequencies using a “select” message – for example, sending the message “select 100 0.9” filters the output to the 4280–4320 Hz range, close to a very high C# note. To have greater control over specific frequency choices, I’ve increased the pfft~ frame size from 1024 to 8192 samples.
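To see why the larger frame size helps, here’s a rough sketch (in plain JavaScript, assuming a 44.1 kHz sample rate, which the patch itself doesn’t specify) of how an FFT bin index maps to a frequency:

```javascript
// Each FFT bin spans sampleRate / frameSize Hz, so larger frames give finer control.
const sampleRate = 44100; // assumed; the actual patch's rate may differ

function binToFreq(bin, frameSize) {
  return (bin * sampleRate) / frameSize;
}

console.log(binToFreq(100, 1024)); // ~4306 Hz -- the "select 100" example above
console.log(binToFreq(100, 8192)); // ~538 Hz  -- bins are now only ~5.4 Hz apart,
                                   // so individual pitches can be targeted precisely
```

At 8192 samples per frame, adjacent bins are only a few hertz apart, which makes it practical to land on specific notes of a scale.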

Here’s an example of the resulting sound using the video clips as an audio source and creating random triads based on notes in a C major scale:

In the final piece, this filtered audio is routed into delay and reverb effects whose intensity changes based on the number of stationary observers and the length of viewers’ stillness.

EDPX 4320 Interactive Art – initial project ideas

For my current EDP course, I’m focusing on an idea that I’ve been planning to work on for a while (current working title: Signals and Stillness). The piece is inspired by being in public spaces where I’m subjected to (usually multiple) television displays that I have no control over: the large display featured in the work will assail the viewer with various clips of commercial media – news reports, advertisements, sports coverage, daytime talk shows, etc. Unlike a normal TV, this one senses when someone is standing/sitting in front of it, and begins to respond to (and reward) the viewer’s lack of motion. The longer a viewer remains still, the more the display changes, altering the audio of the clips into a serene, music-like soundtrack and morphing the video scenes into abstract washes of colors and shapes. The goal is to encourage a meditative state of presence in the viewer – the “noise” of the rapidly changing content that normally demands their attention is transformed into an object of contemplation, reflection and curiosity.

A preliminary layout sketch of the work:

The basic programming workflow for the piece is illustrated in the diagram below:

Raspberry Pi objective #4 – Interaction with existing WS2801 LED project

Objective: Program a simple test in Python, running on a Raspberry Pi, to control a large LED display in real time.

I previously created a large 8′ x 4′ LED board that displays randomized abstract patterns, driven by an Arduino-compatible chipKIT UNO32 microcontroller. An example of the visuals it produces can be viewed here (the video shows a smaller 4′ x 4′ display, but the result is the same). This exercise explores how to control that same board using a Raspberry Pi programmed in Python.

Here is a video documenting the result:

Raspberry Pi interfacing with WS2801 LED display.

The code for this example is available on GitHub here (it’s the “joystick_move_test.py” file).

This program relies heavily on the Python Pygame library for the graphics, animation and joystick interfacing; on the Adafruit Python WS2801 code to control the WS2801 LEDs from Python; and on the Adafruit Python GPIO library to interface the LEDs with the Pi’s pinouts. (These are the same Adafruit libraries that I’ve tried using in Processing’s Python mode on the Pi without success…but they work just fine in native Python.)

For the Pygame graphics and controller features, the following online tutorials were very valuable and used as a starting point:

As mentioned in the video, some visual glitching occasionally occurs on the LED board during the test. Using shorter (and perhaps thicker gauge) wires for the DATA and CLOCK connections between the Pi and the LEDs would likely alleviate or eliminate this issue.

Raspberry Pi objective #3 pt. 1 – Sound with Processing

In initial tests to transfer the Cloudscape display Processing sketch to run on the Pi, I encountered errors when loading and playing back multiple audio events. The original sketch uses the Minim library, which works great on other platforms and features easy controls for audio fading and amplitude analysis (used to manipulate the intensity of the cloud LEDs as the audio files play). To further troubleshoot the issues with Minim, and also to test other audio library options, I created some Pi-based Processing sketches, which can be found here: https://github.com/richpath/Pi-audio-test. The README file explains the function and usage of each sketch.

First up in testing – the Minim library. Minim works fine for playing a limited number of audio files, but throws errors when multiple files are loaded for playback triggering:

Screen shot of Processing error on Raspberry Pi

This occurs when the spacebar is pressed to reload a new set of random audio files into each AudioSample variable. It appears that Minim has limitations on the Pi regarding how many total files can be loaded into AudioPlayer or AudioSample objects, and increasing the memory available to Processing for running the sketch doesn’t solve the issue. Other Pi users have reported similar experiences: https://forum.processing.org/two/discussion/24254/minim-large-audio-files-work-on-windows-not-on-raspberry-pi and https://forum.processing.org/two/discussion/21953/why-can-i-only-load-four-audio-files-in-minum. In the latter posting, using the AudioSample object works for that person; however, they are loading fewer samples than my program requires. Also, Minim will only play back audio through the Pi’s built-in headphone jack using PCM, which is lower quality than using a dedicated USB audio interface.

Next I tried the standard Sound library that is included with Processing v3. The results are better…multiple sound files can be loaded multiple times without error. However, as with Minim, the Sound library will only play audio via PCM through the headphone jack, and simultaneous playback of multiple files results in sonic glitching and heavy distortion. Even when a USB audio device is set as the default output hardware in the Pi’s preferences, the Sound library still reverts to the headphone jack. There is an “AudioDevice” method, but it doesn’t offer a parameter for selecting particular system audio hardware. Also, the GitHub site for this library states that it’s no longer being actively developed and that a new version of the library for Processing is in development. The newer version will hopefully address the audio hardware selection issue; in the meantime, I continue to look elsewhere for a functioning audio playback solution.

Part 2 will explore using the Beads library – a link will be provided here when that post is published.

p5 EDP logo project

This sketch was created as a final project for the EDPX 4010 Emergent Digital Tools course. The assignment was to create a visual generative piece using p5.js that somehow incorporates the EDP program logo. The project will then be displayed on a large LCD screen in the program’s office as part of a collection of digital art works.

After experimenting (unsuccessfully) with other ideas based on inspiring sketches found on OpenProcessing, I took an approach for this project that would run over long periods of time at a reliable frame rate, avoiding the speed limitations that I’ve encountered when using p5 & JavaScript for more complex sketches. Each letter of “edp” is controlled by noise-influenced parameters affecting its position, scale, hue and brightness. The descriptive text in the background is “painted over” by the larger letters, and fades and shifts position after about a minute. To achieve a clean trailing effect – a transparent background causes a visual artifact of “ghost trail” images – I used a suggested technique of initially capturing a close-to-black image of the empty canvas and then blending that image onto each new frame using the DIFFERENCE (subtractive) blending mode, as sketched below.
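Here’s a minimal p5.js sketch of that fading technique – the specific brightness value and the drawn content are my own stand-ins, not the values used in the actual project:

```javascript
let fadeImage;

function setup() {
  createCanvas(400, 400);
  // Capture a near-black snapshot of the canvas once; blending it into every
  // frame with DIFFERENCE subtracts a tiny amount of brightness, so old marks
  // fade fully to black instead of leaving "ghost trail" artifacts.
  background(2); // close to, but not exactly, black
  fadeImage = get(); // grab the whole canvas as an image
  background(0);
}

function draw() {
  // Darken everything slightly...
  blend(fadeImage, 0, 0, width, height, 0, 0, width, height, DIFFERENCE);
  // ...then draw this frame's content on top.
  noStroke();
  fill(255, 200, 50);
  ellipse(mouseX, mouseY, 40, 40);
}
```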

The project can be viewed in its full 1920×1080 dimensions here, and the p5.js source code can be viewed here. The target display that this will run on is rotated 90 degrees clockwise, which is why the sketch appears sideways here.

The Unbearable Slowness of p5-ing

…well, “unbearable” might be a stretch. But compared to running native Processing sketches in Java, the difference is certainly noticeable. While experimenting with potential approaches for the final generative art project in EDPX 4010, I was very inspired by Asher Salomon’s “Evolution” sketch on OpenProcessing: https://www.openprocessing.org/sketch/15839, which creates a very cool watercolor-esque painting effect. My initial attempts at porting Salomon’s code to p5/JavaScript resulted in an abysmally slow rendering rate…nothing even close to the original sketch. After a lot of further exploration and changes, I finally reached a more respectable speed, though not one that could be used on a large 1920×1080 canvas. My p5 results can be viewed here, and the source code is here. Some important discoveries:

1 – Instead of the get() and set() methods used in the original (which are still available in p5/JS), getting and setting pixel colors via the pixels[] array is faster. loadPixels() and updatePixels() need to be included for this approach to work. Dan Shiffman’s “Pixel Array” tutorial video was incredibly helpful in understanding this process – in particular, setting pixelDensity to 1 for my Mac’s retina display was a trick that I did not find documented elsewhere.

2 – Replacing p5’s floor() and constrain() methods with the native JavaScript Math.floor, Math.min & Math.max methods improved performance by a substantial margin! (A rough sketch combining both changes appears below.)
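A small illustrative p5.js example of both optimizations – direct pixels[] access with pixelDensity(1), plus native Math calls in place of p5’s helpers. The drawing itself is just a placeholder, not the Evolution port:

```javascript
function setup() {
  createCanvas(200, 200);
  pixelDensity(1); // one array entry per canvas pixel, even on retina displays
}

function draw() {
  loadPixels();
  for (let i = 0; i < 500; i++) {
    // Native Math calls instead of p5's random()/floor()/constrain()
    let x = Math.floor(Math.random() * width);
    let y = Math.floor(Math.random() * height);
    x = Math.min(Math.max(x, 0), width - 1);  // equivalent of constrain(x, 0, width - 1)
    y = Math.min(Math.max(y, 0), height - 1); // equivalent of constrain(y, 0, height - 1)

    // Each pixel occupies four consecutive entries in pixels[]: R, G, B, A
    const idx = 4 * (y * width + x);
    pixels[idx] = 255;     // red
    pixels[idx + 1] = 100; // green
    pixels[idx + 2] = 0;   // blue
    pixels[idx + 3] = 255; // alpha
  }
  updatePixels();
}
```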

There are probably more optimization tweaks to be made to the p5 version – to be continued…

 

“We Become What We Behold”

I’ve been a fan of Nicky Case’s work for a while, especially his Parable of the Polygons and Neurotic Neurons projects. His most recent work, “We Become What We Behold”, is the latest addition to his thoughtful and entertaining online collection.

The “game” is described as being about “news cycles, vicious cycles, infinite cycles.” To me, it’s an inspiring example of intentional…I might even say “activist”…web-based art, especially given the outcome of the recent national election and the media’s role in feeding fears and influencing voters’ choices. I initially felt that Case was being too extreme in his portrayal of media sensationalism, but his blog post about the project sheds a bit more light on where he’s coming from. He describes how his fellowship with the PBS program Frontline gave him a deeper perspective on the lure of “clickbait” and how journalists struggle with it – the sensationalist stories end up getting the most attention from the general public. Even if you don’t agree with his viewpoint about “the media”, the experience of the game provides an interesting catalyst for conversations about the effect of these cycles on our society and culture. It’s also great that Case has made the code for this (and his other projects) openly available for other developers to play with and freely remix.

Arduino/MicroView sound file controller & looper

A recent challenge in the EDPX 4010 course was to connect an Arduino device via a serial port to control a p5.js sketch. In this case, we’re working with the Arduino-compatible MicroView module that is included in this SparkFun Inventor’s Kit. I wanted to explore the p5 sound library further, so I made a simple device that controls the playback speed of an audio file (between 0 and 3x) with a potentiometer, and can also loop a chosen section of the audio file using pushbutton controls.

Pressing the black button sets the start point of the looped segment, and the red button sets the end point and begins the looped playback of that segment. Pressing the red button again will set another end point in the loop and shorten the looped segment even more, and the black button will stop the looping and continue the playback normally. The MicroView screen displays the playback speed of the audio and the status of the black and red buttons. The p5 screen (above right) displays the current playback rate, whether looping is on or off (true or false), the status of the pushbuttons, the start (cued) time of the loop, and the current time of the audio file’s playback. The size of the yellow circle changes based on the playback rate. The p5 source code for the project is available here, and the MicroView/Arduino source code is here.

For the serial port connection, I used the p5.serialport library, along with the p5.serialcontrol GUI application to perform the actual serial communication, since JavaScript in a browser cannot interact directly with a serial port. To run this sketch, you must first open the serialcontrol application and then run the p5 sketch. Basically, the MicroView sends three values as a comma-separated text string through the serial port: the “digitalRead” state of each of the two buttons (0 or 1), and the “analogRead” value of the potentiometer, mapped to 0-255. The p5 sketch receives this text and parses the values with the split() function, separating by commas. The sketch then uses those values to affect the playback speed and looping parameters. It also contains some logic checks to prevent repeated triggering if a button is held down, so that a held push registers as only one push and doesn’t continually change values (this technique is known as “state change” or “edge” detection).
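The core of that receive-and-parse logic looks roughly like this – the names, the serial port path, and the button actions are placeholders, not the actual project code:

```javascript
let serial;        // p5.SerialPort instance from the p5.serialport library
let lastBlack = 0; // previous state of the black (loop start) button
let lastRed = 0;   // previous state of the red (loop end) button

function setup() {
  serial = new p5.SerialPort();
  serial.open('/dev/cu.usbserial-XXXX'); // placeholder port name
  serial.on('data', gotData);
}

function gotData() {
  const line = serial.readLine().trim();
  if (!line) return;

  // MicroView sends "blackButton,redButton,potValue", e.g. "0,1,187"
  const [black, red, pot] = line.split(',').map(Number);

  // Edge detection: react only to the 0 -> 1 transition, not to a held button
  if (black === 1 && lastBlack === 0) {
    // set the loop start point here
  }
  if (red === 1 && lastRed === 0) {
    // set the loop end point / begin looped playback here
  }
  lastBlack = black;
  lastRed = red;

  // Map the 0-255 pot reading to a 0-3x playback rate
  const rate = map(pot, 0, 255, 0, 3);
  // soundFile.rate(rate);  // applied to the p5.sound player in the real sketch
}
```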

A few glitches with the p5.sound library: before the playback of a loop begins, the library first stops the playing state, sets the loop cue times, and then restarts playing, which creates an audible short pause. Also, I initially had the potentiometer control the direction as well as the speed, so that the audio could be played in reverse. However, the library seems to reset the playback point to the beginning of the file before it begins the reverse playback, so the forwards/backwards control does not sound seamless – it always restarts from the same point in the file. I’m interested in digging further into the code of the library itself to see if I can change that behavior.

400 robot heads

Assignment for 4010 course: create a grid of robot heads, 20×20, with four variations shifting between rows or columns. The center four should “make a robot sound when clicked”. If you click on the center four figures in this sketch, you’ll hear a random quote spoken in synthesized speech, via the p5.speech library.
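A bare-bones illustration of triggering synthesized speech with the p5.speech library when one of the center figures is clicked – the hit test and quotes here are stand-ins, not the assignment’s actual values:

```javascript
let voice;
const quotes = ['Robots are not people.', 'I think, therefore I beep.']; // stand-in quotes

function setup() {
  createCanvas(400, 400);
  voice = new p5.Speech(); // speech synthesizer from the p5.speech library
}

function draw() {
  background(220);
  // ...the 20x20 grid of robot heads would be drawn here...
}

function mousePressed() {
  // Stand-in hit test for the four center heads
  const inCenter = mouseX > width / 2 - 40 && mouseX < width / 2 + 40 &&
                   mouseY > height / 2 - 40 && mouseY < height / 2 + 40;
  if (inCenter) {
    voice.speak(random(quotes)); // speak a randomly chosen quote
  }
}
```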

The single eye of each head also follows the mouse location, utilizing p5’s “constrain” function. Source code available here. The quotes were selected from this collection.
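And a tiny sketch of how constrain() can keep a pupil inside its eye while tracking the mouse (the dimensions here are arbitrary, not the ones from the assignment):

```javascript
function setup() {
  createCanvas(200, 200);
}

function draw() {
  background(220);
  drawEye(100, 100, 40);
}

// Draw one eye whose pupil follows the mouse but stays inside the eyeball
function drawEye(eyeX, eyeY, eyeRadius) {
  fill(255);
  ellipse(eyeX, eyeY, eyeRadius * 2);

  // Clamp the pupil's center to a region around the eye's center
  const px = constrain(mouseX, eyeX - eyeRadius / 2, eyeX + eyeRadius / 2);
  const py = constrain(mouseY, eyeY - eyeRadius / 2, eyeY + eyeRadius / 2);

  fill(0);
  ellipse(px, py, eyeRadius / 2);
}
```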

3D sound possibilities

As spotted recently on the “prosthetic knowledge” tumblr site – the Holographic Whisper three-dimensional spatial audio speaker system. (The slightly-over-the-top-futuristic-tech-style promotional video is included below…)

The creators propose a “sound-point method” that enables control of “aerial audio distributions more flexibly and more precisely in comparison with conventional superdirectional (sound-beam) loudspeakers. This method can generate and vanish the sound sources freely in air. These point sound sources can deliver private messages or music to individuals.” Unfortunately, there is no clear link to the mentioned research paper, and it doesn’t look like a prototype has been developed at this point. But it certainly warrants further exploration – I’ve been intrigued for a while with the idea of creating a sonic installation in a space that could record the voices of attendees, and then play back segments of those recordings to future attendees, with the audio targeted (to be heard) at the same spatial location where the voices were recorded…a sonically “haunted” room filled with the voices of ghosts from past visitors.


© 2024 Noises & Signals
