Noises & Signals

Contemplations on creativity in our digital age

Author: richpath

Protected: Signals and Stillness – final course presentation


Signals and Stillness – installation overview

Signals and Stillness – installation demonstration video. Use this password to view: currents2019.

Summary

Signals and Stillness was created in response to the bombardment of commercial media that we frequently experience from television displays in public settings. The piece reverses the relationship between the viewer and broadcast content that is intent on influencing and demanding the viewer’s attention. The large monitor in the piece “recognizes” and responds to the prolonged stationary presence of an observer – the rapidly switching excerpts of advertisements, news, sports and talk shows begin to morph into their abstracted visual and sonic essence. Multiple viewers observing in stillness influence the effect further, revealing deeper layers and patterns of abstraction. The piece explores choices of what we give our attention to – a stream of information overload, or our “present presence”.

Installation images (captions): silhouette of a viewer in front of the Signals and Stillness projected display; viewing in stillness; full view of the installation area; three observing circles on the floor, illuminated from the ceiling; three viewers observing the display; multiple viewers observing the changes.

Longer description

The centerpiece of the installation is a large television monitor (between 55” and 65”), elevated 5–6 feet from the floor, or a large projected display. In its default state, the display features rapidly changing clips from current advertisements, news stories, daytime talk shows and sports highlights, each played at normal speed (with normal audio) for a few seconds before switching to the next clip.

Three circles, each around 24 inches in diameter, are illuminated on the floor in front of the display by LED spotlights suspended from the ceiling. When a viewer remains still in one of the circles, their presence is detected by the system and the display begins to change in response – the playback speed slows, and the original video images are gradually transformed into abstract scenes of ethereal light, color, shapes and patterns. The audio morphs into synth-like musical chords through FFT filtering, delay and reverb effects. The longer a viewer remains within a circle, the more pronounced these effects become. When two or three people are standing still in the circles, additional visual and audio effects begin to appear, encouraging experimentation with group stillness.

The installation is driven by custom programs running in Max on a dedicated laptop or computer. The audio can be presented either through near-field active monitors placed on each side of the observation-circle area, or through headphones provided to the viewer in each circle.
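The exact mappings live in the Max patches, but here is a minimal Python sketch of the logic described above; all names, ranges and curves are illustrative assumptions rather than the values used in the piece:

# Illustrative sketch of the stillness-to-effect mapping described above.
# The real piece is implemented in Max; these names, ranges and curves are
# assumptions for demonstration only.

def effect_params(still_seconds, num_still_viewers, ramp_seconds=60.0):
    """Map dwell time and still-viewer count to normalized effect intensities."""
    # Base intensity ramps from 0 to 1 over roughly a minute of stillness.
    intensity = min(still_seconds / ramp_seconds, 1.0)
    # Each additional still viewer (up to the three circles) unlocks deeper layers.
    extra_layers = max(0, min(num_still_viewers, 3) - 1)
    return {
        "playback_speed": 1.0 - 0.75 * intensity,   # 1.0 = normal speed, 0.25 = slowest
        "abstraction_mix": intensity,                # raw video -> abstract visuals
        "fft_filter_mix": intensity,                 # raw audio -> filtered chords
        "reverb_wet": 0.1 + 0.6 * intensity + 0.15 * extra_layers,
    }

if __name__ == "__main__":
    # Two viewers standing still for 30 seconds.
    print(effect_params(still_seconds=30, num_still_viewers=2))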

Protected: Signals and Stillness – exhibition submission documentation


Protected: EDPX4320 Project review/soft crit


Max & FFT filter experimentation

For the audio transformation aspect of Signals and Stillness, I wanted to achieve an effect similar to one that I’ve worked with in Adobe Audition, where an FFT filter can be used to isolate particular frequencies, resulting in an ethereal, musical-chord-like output created from any audio source (e.g., the “C Major Triad” preset of Audition’s FFT filter). I found a great starting point for this effect in the “Forbidden Planet” patch from the main Max examples collection.

Specific values can be set in the multislider object to isolate frequencies using a “select” message – for example, sending the message “select 100 0.9” filters the output to frequencies between roughly 4280 and 4320 Hz, close to a very high C# note. For finer control over specific frequency choices, I’ve increased the FFT frame size of the pfft~ object from 1024 to 8192.
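As a sanity check on those numbers: the center frequency of an FFT bin is simply bin_index × sample_rate / fft_size. A small Python sketch (assuming a 44.1 kHz sample rate, which is my assumption rather than something stated in the Max example) shows why bin 100 of a 1024-point FFT lands around 4300 Hz, and how an 8192-point FFT gives much finer control:

# Center frequency and width of an FFT bin: freq = bin_index * sample_rate / fft_size.
# Assumes a 44.1 kHz sample rate (an assumption; not stated in the Max example).
SAMPLE_RATE = 44100.0

def bin_center(bin_index, fft_size):
    return bin_index * SAMPLE_RATE / fft_size

def bin_width(fft_size):
    return SAMPLE_RATE / fft_size

# With the original 1024-point FFT, bin 100 sits near 4307 Hz and is ~43 Hz wide,
# which matches the ~4280-4320 Hz range noted above.
print(bin_center(100, 1024), bin_width(1024))   # ~4306.6 Hz, ~43.1 Hz

# Increasing the FFT size to 8192 shrinks each bin to ~5.4 Hz,
# allowing much more precise frequency choices.
print(bin_center(100, 8192), bin_width(8192))   # ~538.3 Hz, ~5.4 Hz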

Here’s an example of the resulting sound using the video clips as an audio source and creating random triads based on notes in a C major scale:

In the final piece, this filtered audio is routed into delay and reverb effects whose intensity changes with the number of stationary observers and the length of their stillness.
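As a rough illustration of the random-triad idea mentioned above (not the actual Max patch), the sketch below picks a triad from a C major scale and converts each note’s frequency to the nearest bin of an 8192-point FFT, again assuming a 44.1 kHz sample rate:

# Illustration only: pick a random diatonic triad from a C major scale and find
# the nearest FFT bins for its notes. Scale range and FFT settings are assumptions.
import random

SAMPLE_RATE = 44100.0
FFT_SIZE = 8192

# MIDI note numbers for one octave of the C major scale starting at C5.
C_MAJOR_SCALE = [72, 74, 76, 77, 79, 81, 83]  # C5 D5 E5 F5 G5 A5 B5

def midi_to_freq(note):
    return 440.0 * 2 ** ((note - 69) / 12.0)

def freq_to_bin(freq):
    return round(freq * FFT_SIZE / SAMPLE_RATE)

def random_triad():
    """Stack two diatonic thirds on a random scale degree (e.g. C-E-G)."""
    degree = random.randrange(len(C_MAJOR_SCALE))
    notes = [C_MAJOR_SCALE[(degree + step) % len(C_MAJOR_SCALE)] for step in (0, 2, 4)]
    # Keep the stacked notes ascending by adding an octave where the scale wraps.
    notes = [n + 12 if i > 0 and n <= notes[0] else n for i, n in enumerate(notes)]
    return [(n, round(midi_to_freq(n), 1), freq_to_bin(midi_to_freq(n))) for n in notes]

print(random_triad())  # e.g. [(72, 523.3, 97), (76, 659.3, 122), (79, 784.0, 146)]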

EDPX 4320 Interactive Art – initial project ideas

For my current EDP course, I’m focusing on an idea that I’ve been planning to work on for a while (current working title: Signals and Stillness). It was inspired by being in public spaces where I’m subjected to (usually multiple) television displays that I have no control over. The large display featured in the work assails the viewer with various clips of commercial media – news reports, advertisements, sports coverage, daytime talk shows, etc. Unlike a normal TV, this one senses when someone is standing or sitting in front of it, and begins to respond to (and reward) the viewer’s lack of motion. The longer a viewer remains in stillness, the more the display changes, altering the audio of the clips into a serene, music-like soundtrack and morphing the video scenes into abstract washes of color and shape. The goal is to encourage a meditative state of presence in the viewer – the “noise” of the rapidly changing content that normally demands their attention is transformed into an object of contemplation, reflection and curiosity.

A preliminary layout sketch of the work:

The basic programming workflow for the piece is illustrated in the diagram below:

Raspberry Pi objective #4 – Interaction with existing WS2801 LED project

Objective: Program a simple test in Python, running on a Raspberry Pi, to control a large LED display in real time.

A previous project of mine is a large 8′ x 4′ LED board that displays randomized abstract patterns, driven by an Arduino-compatible chipKIT UNO32 microcontroller. An example of the visuals it produces can be viewed here (the video shows a smaller 4′ x 4′ display, but the result is the same). This exercise explores how to control that same board using a Raspberry Pi programmed in Python.

Here is a video documenting the result:

Raspberry Pi interfacing with WS2801 LED display.

The code for this example is available on GitHub here (it’s the joystick_move_test.py file).

This program relies heavily on the Python Pygame library for the graphics, animation and joystick interfacing; on the Adafruit Python WS2801 library to control the WS2801 LEDs from Python; and on the Adafruit Python GPIO library to interface the LEDs with the Pi’s pinouts. (These are the same Adafruit libraries that I’ve tried using in Processing’s Python mode on the Pi without success…but they work just fine in native Python.)
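For reference, a minimal sketch of driving WS2801 pixels through those Adafruit libraries over the Pi’s hardware SPI looks roughly like this (the pixel count and colors are placeholders, not the settings of the actual board):

# Minimal WS2801 test using the Adafruit Python libraries mentioned above.
# PIXEL_COUNT and the color values are placeholders, not the real board's settings.
import time

import Adafruit_WS2801
import Adafruit_GPIO.SPI as SPI

PIXEL_COUNT = 64          # placeholder; the actual board has many more LEDs
SPI_PORT = 0              # Pi hardware SPI: SCLK -> CLOCK, MOSI -> DATA
SPI_DEVICE = 0

pixels = Adafruit_WS2801.WS2801Pixels(PIXEL_COUNT,
                                      spi=SPI.SpiDev(SPI_PORT, SPI_DEVICE))

pixels.clear()
pixels.show()             # push the cleared buffer to the strip

# Sweep a single blue pixel down the strip.
for i in range(PIXEL_COUNT):
    pixels.clear()
    pixels.set_pixel(i, Adafruit_WS2801.RGB_to_color(0, 0, 255))
    pixels.show()
    time.sleep(0.02)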

For the Pygame graphics and controller features, the following online tutorials were very valuable and used as a starting point:

As mentioned in the video, some visual glitching occasionally occurs on the LED board during the test. Using shorter (and perhaps thicker gauge) wires for the DATA and CLOCK connections between the Pi and the LEDs would likely alleviate or eliminate this issue.

Raspberry Pi objective #3 pt. 1 – Sound with Processing

In initial tests to transfer the Cloudscape display Processing sketch to the Pi, I encountered errors when loading and playing back multiple audio events. The original sketch uses the Minim library, which works great on other platforms and offers easy controls for audio fading and amplitude analysis (used to manipulate the intensity of the cloud LEDs as the audio files play). To further troubleshoot the issues with Minim, and to test other audio library options, I created some Pi-based Processing sketches, which can be found here: https://github.com/richpath/Pi-audio-test. The README file explains the function and usage of each sketch.

First up in testing – the Minim library. Minim works fine for playing a limited number of audio files, but throws errors when multiple files are loaded for playback triggering:

Screen shot of Processing error on Raspberry Pi

This occurs when the spacebar is pressed to reload a new set of random audio files into each AudioSample variable. It appears that Minim has limitations on the Pi regarding how many total files can be loaded into AudioPlayer or AudioSample objects. Increasing the memory available to Processing for the sketch doesn’t solve the issue. Other Pi users have reported similar experiences: https://forum.processing.org/two/discussion/24254/minim-large-audio-files-work-on-windows-not-on-raspberry-pi and https://forum.processing.org/two/discussion/21953/why-can-i-only-load-four-audio-files-in-minum. In the latter post, using the AudioSample object works for that person; however, they are loading fewer samples than my program requires. Also, Minim will only play audio through the Pi’s built-in headphone jack using PCM, which is lower quality than a dedicated USB audio interface.

Next I tried the standard Sound library included with Processing v3. The results are better…multiple sound files can be loaded repeatedly without error. However, as with Minim, the Sound library will only play audio via PCM through the headphone jack, and simultaneous playback of multiple files results in sonic glitching and heavy distortion. Even when a USB audio device is set as the default output hardware in the Pi’s preferences, the Sound library still reverts to the headphone jack. There is an “AudioDevice” method, but it doesn’t offer a parameter for selecting particular system audio hardware. Also, the GitHub site for this library states that it is no longer being actively developed, and a new version of the library for Processing is currently in development. The newer version will hopefully address the audio hardware selection issue; in the meantime, I continue to look elsewhere for a working audio playback solution.

Part 2 will explore using the Beads library – a link will be provided here when that post is published.

p5 EDP logo project

This sketch was created as a final project for the EDPX 4010 Emergent Digital Tools course. The assignment was to create a visual generative piece using p5.js that somehow incorporates the EDP program logo. The project will then be displayed on a large LCD screen in the program’s office as part of a collection of digital artworks.

After experimenting (unsuccessfully) with other ideas based on inspiring sketches found on OpenProcessing, I took an approach for this project that could run over long periods of time at a reliable frame rate, avoiding the speed limitations I’ve encountered using p5 and JavaScript for more complex sketches. Each letter of “edp” is controlled by noise-influenced parameters affecting its position, scale, hue and brightness. The descriptive text in the background is “painted over” by the larger letters, and fades and shifts position after about a minute. To achieve a clean trailing effect, rather than using a transparent background (which leaves a visual artifact of “ghost trail” images), I used a suggested technique: capture a close-to-black image of the empty canvas at the start, then blend that image into each new frame using the DIFFERENCE (subtractive) blending mode.
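To see why the DIFFERENCE trick fades cleanly while a low-alpha transparent background leaves ghost trails, it helps to follow the per-channel arithmetic. The sketch below is plain Python illustrating only the math (not any p5 API): canvas channels are 8-bit integers, so a low-alpha fade toward black eventually rounds to no change and stalls, while DIFFERENCE-blending a near-black value of 1 keeps stepping each channel down until it reaches zero:

# Per-channel arithmetic behind the two fading approaches described above.
# Canvas channels are 8-bit integers, so alpha fades get stuck once the
# per-frame change rounds to zero; DIFFERENCE against a value of 1 keeps
# stepping the channel down by 1 until it reaches black.

def alpha_fade(value, alpha=10, frames=300):
    # Roughly equivalent to drawing background(0, alpha) over the pixel each frame.
    for _ in range(frames):
        value = round(value * (255 - alpha) / 255)
    return value

def difference_fade(value, frames=300):
    # Roughly equivalent to blending a (1,1,1) image with DIFFERENCE each frame.
    for _ in range(frames):
        value = abs(value - 1)
    return value

print(alpha_fade(200))       # stalls at a small but visible gray (~12)
print(difference_fade(200))  # reaches 0: trails fade fully to black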

The project can be viewed in its full 1920×1080 dimensions here, and the p5.js source code can be viewed here. The target display it will run on is rotated 90 degrees clockwise, which is why the sketch appears sideways here.

The Unbearable Slowness of p5-ing

…well, “unbearable” might be a stretch. But compared to running native Processing sketches in Java, the difference is certainly noticeable. While experimenting with potential approaches for the final generative art project in EDPX 4010, I was very inspired by Asher Salomon’s “Evolution” sketch on OpenProcessing: https://www.openprocessing.org/sketch/15839, which creates a very cool watercolor-esque painting effect. My initial attempts at porting Salomon’s code to p5/JavaScript resulted in an abysmally slow rendering rate…nothing even close to the original sketch. After a lot of further exploration and changes, I finally reached a more respectable speed, though still not something that could be used on a large 1920×1080 canvas. My p5 results can be viewed here, and the source code is here. Some important discoveries:

1 – Instead of the get() and set() methods used in the original (which are still available in p5.js), getting and setting pixel colors via the pixels[] array is faster. loadPixels() and updatePixels() need to be called for this approach to work. Dan Shiffman’s “Pixel Array” tutorial video was incredibly helpful in understanding this process – in particular, setting pixelDensity to 1 for my Mac’s retina display was a trick that I did not find documented elsewhere.

2 – Instead of using p5’s floor() and constrain() methods, the native JavaScript Math.floor, Math.min and Math.max functions performed faster by a substantial margin!

There are probably more optimization tweaks to be made to the p5 version – to be continued…

 


© 2019 Noises & Signals
