Noises & Signals

Contemplations on creativity in our digital age


Raspberry Pi objective #4 – Interaction with existing WS2801 LED project

Objective: Program a simple test in Python, running on a Raspberry Pi, to control a large LED display in real time.

One of my previous projects is a large 8′ x 4′ LED board that displays randomized abstract patterns, driven by an Arduino-compatible chipKIT UNO32 microcontroller. An example of the visuals produced can be viewed here (this is video of a smaller 4′ x 4′ display, but the result is the same). This exercise explores how to control that same board using a Raspberry Pi programmed in Python.

Here is video documentation of the result:

Raspberry Pi interfacing with WS2801 LED display.

The code for this example is available on GitHub here (it’s the “joystick_move_test.py” file).

This program relies heavily on the Python Pygame library for the graphics, animation and joystick interfacing; on the Adafruit Python WS2801 code to control the WS2801 LEDs from Python; and on the Adafruit Python GPIO library to interface the LEDs with the Pi’s pinouts. (These are the same Adafruit libraries that I’ve tried using in Processing’s Python mode on the Pi without success…but they work just fine in native Python.)
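The heart of a program like this is translating what Pygame draws on screen into positions along a single LED strand. A minimal sketch of that idea follows — the grid dimensions and serpentine (zigzag) wiring order here are assumptions for illustration, not taken from the actual project:

```python
# Sketch of the core idea: mapping a 2D grid position (as drawn in
# Pygame) onto a single WS2801 strand index. Many large LED boards
# are wired in a serpentine pattern, so odd rows run in reverse.
# GRID_W and GRID_H are hypothetical values, not the real board's.

GRID_W = 32  # columns of LEDs (assumed)
GRID_H = 16  # rows of LEDs (assumed)

def grid_to_strand(x, y):
    """Convert (x, y) grid coordinates to a strand index, assuming
    serpentine wiring that starts at the top-left corner."""
    if y % 2 == 0:
        return y * GRID_W + x                  # even rows: left-to-right
    return y * GRID_W + (GRID_W - 1 - x)       # odd rows: right-to-left

# With the Adafruit WS2801 library, updating a frame would then look
# roughly like (hardware calls shown only as comments):
#   pixels.set_pixel_rgb(grid_to_strand(x, y), r, g, b)
#   pixels.show()
```

Getting this mapping right is usually the difference between coherent animation and scrambled rows on the physical board.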

For the Pygame graphics and controller features, the following online tutorials were very valuable and used as a starting point:

As mentioned in the video, some visual glitching occasionally occurs on the LED board during the test. Using shorter (and perhaps thicker-gauge) wires for the DATA and CLOCK connections between the Pi and the LEDs would likely alleviate or eliminate this issue.

Raspberry Pi objective #3 pt. 1 – Sound with Processing

In initial tests to transfer the Cloudscape display Processing sketch to run on the Pi, I encountered errors when loading and playing back multiple audio events. The original sketch uses the Minim library, which works great on other platforms and features easy controls for audio fading and amplitude analysis (to manipulate the intensity of the cloud LEDs as the audio files are playing). To further troubleshoot the issues with Minim, and also test other audio library options, I created some Pi-based Processing sketches, which can be found here: https://github.com/richpath/Pi-audio-test. The README file explains the function and usage of each sketch.

First up in testing – the Minim library. Minim works fine for playing a limited number of audio files, but throws errors when multiple files are loaded for playback triggering:

Screen shot of Processing error on Raspberry Pi

This occurs when the spacebar is pressed to reload a new set of random audio files into each AudioSample variable. It appears that Minim has limitations on the Pi regarding how many total files can be loaded into AudioPlayer or AudioSample objects. Increasing the available memory for Processing to run the sketch doesn’t solve the issue. Other Pi users have reported similar experiences: https://forum.processing.org/two/discussion/24254/minim-large-audio-files-work-on-windows-not-on-raspberry-pi and https://forum.processing.org/two/discussion/21953/why-can-i-only-load-four-audio-files-in-minum. In the latter posting, the use of the AudioSample object works for the person; however, they are loading fewer samples than my program requires. Also, Minim will play back audio only through the Pi’s built-in headphone jack using PCM, which is lower quality than using a dedicated USB audio interface.

Next I tried the standard Sound library that is included with Processing v3. The results are better…multiple sound files can be loaded multiple times without error. However, as with Minim, the Sound library will only play audio via PCM through the headphone jack, and simultaneous playback of multiple files results in sonic glitching and heavy distortion. Even when a USB audio device is set as the default output hardware in the Pi’s preferences, the Sound library still reverts to the headphone jack. There is an “AudioDevice” method, but it doesn’t feature a variable to select particular system audio hardware. Also, the GitHub site for this library states that it’s no longer being actively developed, and a new version of the library for Processing is currently in development. The newer version will hopefully address the audio hardware selection issue; in the meantime, I continue to look elsewhere for a functioning audio playback solution.

Part 2 will explore using the Beads library – a link will be provided here when that post is published.

At what risk?

A slight diversion of focus for this post…this past Friday (Oct. 21st), there were large distributed denial-of-service attacks (targeted at servers maintained by the company Dyn) which affected many major sites, including Netflix and Twitter. It appears that thousands of the DDoS sources included “internet of things” devices like webcams…and some of those are now being recalled:
Webcams used to attack Reddit and Twitter recalled – http://www.bbc.com/news/technology-37750798
The Chinese electronics manufacturer Hangzhou Xiongmai stated that many of their cameras could be easily hacked since users didn’t bother to change the default password on their devices. A bigger issue is that some devices don’t even allow users to change a default password. The BBC article states “Security costs money and electronics firms want to make their IoT device as cheap as possible. Paying developers to write secure code might mean a gadget is late to market and is more expensive. Plus enforcing good security on these devices can make them harder to use – again that might hit sales.”

…So, they get a chance of a slightly greater profit margin at the risk of a massive cyberattack that knocks out hugely popular websites used daily by millions of people? And they risk the enormous expense of needing to recall and upgrade their devices after such an attack occurs? Hmm…lesson learned??

A peek into the 2016 VIA Festival in Pittsburgh…at a distance via Facebook

I tried to find more video documentation of the VIA performance event in Pittsburgh from this past weekend (thanks to Chris Coleman, who performed in the lineup, for bringing this to my attention). The only clips I’ve uncovered thus far are posted on the event’s Facebook page: https://www.facebook.com/VIA.HQ/videos?ref=page_internal. Most of them are, unfortunately, abysmal in their audio and visual quality, but at least they provide a sampling of what some of the live performances were like. From that small collection, I’m mostly attracted to the clips from Rabit, possibly since the visuals are more abstract and appeared to sync in their changes more directly with the music, as opposed to simply being a decorative backdrop for the music. Makes me think further about the fine balance between those elements – I’ve seen “multimedia” events where the projected visuals feel like they really augment the music (I think mostly of larger concert performances from bands like Porcupine Tree or Pink Floyd/Roger Waters), and other performances where it’s just not working…one element feels like it’s distracting from the other. I’m hoping to find more reflections written by either artists or attendees of VIA to get a better sense of people’s reactions.


p5.js Animation Experiment – Colorful “Bubbles”

A small experiment using p5.js – create a loop of ellipses drawn in random sizes and colors that rise to the top of the canvas at random speeds, and move slightly left & right.

The larger non-iframe version can be viewed here. The p5 code can be viewed here.

An issue to fix – once each ellipse moves beyond the top of the canvas (based on 0 minus current ellipse height), it should be moved back below the bottom (the canvas height value plus the current height of the ellipse), and then rise again from there. However, some of the shapes don’t get reset below the bottom of the canvas…they just appear in the visible lower portion and then begin their ascent. This causes a “popcorn” effect visually…more exploration is needed to figure out why this is happening…
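One plausible culprit, worth checking: if the ellipse’s y coordinate is its center (p5’s default ellipseMode), then both the off-screen test and the reset position need to use half the ellipse height; mixing the full height with a center-based coordinate would make some shapes reappear inside the visible canvas. The intended wrap logic is sketched below in Python for clarity (the actual sketch is p5.js, and the variable names here are hypothetical):

```python
# Intended wrap-around logic for a rising ellipse, assuming y is the
# ellipse's *center* (as in p5's default ellipseMode(CENTER)).
# Both the off-screen test and the reset position use h / 2, so the
# shape is fully hidden before and after the wrap.

CANVAS_H = 400  # hypothetical canvas height

def step(y, h, speed):
    """Move an ellipse of height h upward by speed; once it is fully
    above the top edge, wrap it to just below the bottom edge."""
    y -= speed
    if y < -h / 2:               # fully above the top edge
        y = CANVAS_H + h / 2     # fully below the bottom edge
    return y
```

If the p5 version tests `y < 0 - h` while drawing from the center, an ellipse can sit partially visible yet never trigger the wrap — which would produce exactly the “popcorn” effect described.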

“SAFE” video project

The final version (…at least for now…) of my video “challenging current technical pathways”: SAFE

The password for viewing is the same password that you used to access this blog post.

During the in-class critique last Friday (10/7), it became clear to me that the length of both sections – the collection of narrated accounts in the first half and the fictional ad in the second – had to be shortened. Given the deadline for the project and the footage that I had available to work with, it made more sense to keep this hypothetical technology focused on children and their “protection”, rather than immediately including applications for adults as well. The news story screenshot montage at the beginning is still too “visually basic” for my tastes…perhaps that will be reworked in further exploration of video effects and filters (ah, the hat of compromise that sometimes must be donned when one hits a deadline…)

Most of the footage used in the ad section was shot during the Supernova outdoor animation festival, Sept. 24 in Denver, CO. The music track used for the ad is by Lusine ICL – “Headwind” from the album Language Barrier.

Freedom of choice

I’ve recently become a frequent listener to the podcast “Waking Up”, created by the American author/philosopher/neuroscientist/atheist-at-large Sam Harris, and had the opportunity to hear a particularly good episode last week. In the episode, recorded in July of this year, Harris converses with David Krakauer, currently the president and William H. Miller Professor of Complex Systems at the Santa Fe Institute, about information, complexity and “the future of humanity”. One particular segment of the almost two-hour discussion caught my attention for its connection to topics we’ve been talking about in the 4010 class and issues that Douglas Rushkoff addresses in his “Program or Be Programmed” book. I’ve created an MP3 of this 11-minute excerpt that can be downloaded/listened to here. Krakauer begins by stating his concern over the growing “systematic erosion of human free will”, and cites examples of our (potential) surrender of freedom of choice to online utilities that we commonly interact with – Netflix suggesting things for us to watch, or Amazon presenting books we should read, based on an amalgamation of our previous data and algorithms comparing our tastes to “others like us”. We can always say “no thank you”, but that gets harder and harder when a convenient curated selection is featured front and center for us. Krakauer is concerned that this curation process ends up contracting the “volume” of one’s free choice. After some discussion with Harris exploring the advantages and disadvantages of this approach, compared to previous methods of discovering writing or music (finding books by happenstance while browsing in a bookshop, or simply choosing albums based on their cool covers), Harris emphasizes (around 8 min. 30 sec.)
that he is not calling for a return to the “ghettos of the past”, but argues that “the tools we have now that are so incredible should be allowing us to have freedoms that are unprecedented.” The selection available on Amazon is fantastic, but what comes along with it is “this all-seeing eye that wants to impose, out of largely economic considerations, constraints on what you do. It’s our job to maintain the freedom of the technology.” “Let’s fight the instinct of the technology to treat us as a nuisance in a machine-learning algorithm that would want to be able to predict us perfectly…let’s surprise it.”

I’ve been turned on to some great movies, documentaries, books and music based on automated recommendations…”people who like this have watched…”, “others have purchased…” But it’s reasonable to doubt that these collections of suggestions are truly presented with my interests at heart. What other factors and influences are behind the selections (especially if, say, Netflix has just poured millions into the production of an original show that they are pushing to be successful)? Who are the creators that get shut out in this process (along with their works)? And, if it’s super convenient to simply check out the small lineup of something similar, where is the encouragement to explore something that challenges one’s taste further or that presents something radically different altogether? Hearing the Harris & Krakauer discussion made me think of a piece of music that I hadn’t heard in a while – one that encompasses the message in a 3-and-a-half-minute new wave song: Devo’s “Freedom of Choice”.

Lyrics are here…the final chorus is:
“Freedom of choice
Is what you got
Freedom from choice
Is what you want”
(the original video is an amusing artifact of the early 80s as well, with a growing crowd of people becoming assimilated clones at the end…)

Reflections on SUPERNOVA (9/24/16)

This past Saturday, the Education Forum portion of the SUPERNOVA outdoor festival of digital animation and art in Denver featured presentations from the festival’s jurors – Morehshin Allahyari, Jonathan Monaghan and Claudia Mate – as well as an overview of the Denver Digerati group provided by Ivar Zeile. Some notable takeaways for me included the following:

Ivar discussed how it was initially difficult to find video content produced in a format that utilized the full dimensions of some of the larger screens, as well as difficult to find pieces by Colorado artists working in the animation/digital art genre (in an effort to keep earlier showcases more local in their focus).

The contrast between Claudia Mate’s fast production process and Jonathan Monaghan’s almost obsessive attention to detail. Mate isn’t that concerned about the finer visual details or refined stories in her short pieces – they’re simply meant to document the images or short scenes that she was inspired by. Though her 3D animation is usually low resolution and somewhat crude in its appearance, the concepts and (usually disturbing) ideas are clearly conveyed. (An example of substance over refinement.) Some of her work is shared online at https://vimeo.com/claudiamate/videos. Unfortunately, I’m not able to locate online postings of the shorts she presented at the event.

My mixed feelings about the actual display of the videos…although there was certainly a dedicated viewing audience that attended, most spectators were very momentary with their attention as they passed by. Easily understandable in the busy environment – there was a lot of sonic and visual competition, as well as congested traffic intersections to navigate. (The roads were not closed for the event as they apparently were in a previous year…the logistics and expense for that turned out to be too much.) While the idea of re-contextualizing large public digital displays for artwork is admirable (especially to showcase works that might not easily find another public screening opportunity), I’m not sure that it does justice to the artworks themselves. A more encouraging circumstance was taking place in the main courtyard of the DPAC, where rows of chairs were available for a focused viewing experience on a smaller screen. A more traditional theater-like setting, granted, but the more controlled, comfortable environment assisted in allowing viewers to take in more details.

A great thing about video festivals…attendees are usually more willing to help out someone creating their own video. I got some very nice footage to work with that afternoon and evening for my first 4010 course project.


© 2020 Noises & Signals
