
Late Night Repair

Light-O-Rama CTB16PC 16 Channel Lighting Controller

While helping my son prepare and set up his Christmas light show this year, we discovered that half the channels on this controller had stopped working. This is our second Light-O-Rama CTB16PC controller; both were purchased as kits.

Just in case another Light-O-Rama owner runs into this page, here’s the troubleshooting we performed and the solution that ultimately fixed our issue. (Spoiler: assembly mistake!)

Symptom: Channels 1-8 not working. The controller connects to the network fine, and channels 9-16 work 100%.

Troubleshooting: Our first thought was a blown fuse. We had been setting things up in the rain and assumed the problem was related to a short.

The fuse tested fine with a multimeter. We swapped the two fuses in the unit and tested again; both fuses were working fine.

We used the multimeter to confirm 120V was making it through the fuse holder and tested each “hot” channel output. Everything looked normal, although there was no voltage on the channel 1-8 hots, as expected.

After reviewing the Light-O-Rama forums, I decided to take the controller completely apart and check for any bad solder joints. Of course, I made this decision at 11 PM. I had really been trying to avoid this step; it’s such a time-consuming process.

Success! After a few minutes of careful inspection, I found two pins on an IC that I had completely missed during assembly: no solder at all! I carefully applied solder, checked a few other spots, and reassembled the controller.

BINGO! Problem solved.

U4 IC 20 PIN (74ACT273)


Backing Up Large Video File Collections to AWS Glacier Deep Archive

Part 1 – Why AWS Glacier Deep Archive?

Backing up large collections of raw video footage and edit masters remains a real challenge for anyone working in the video production world. As the Executive Director of a local community media station, I’m responsible for maintaining a Synology NAS which currently holds 55TB of final edit master videos. The idea of incorporating “Cloud Storage” into our backup procedures has always interested me, but the expense has held me back.

Until recently, our backup was rudimentary. We utilized Archive.org, a wonderful organization operating an online digital library of print, audio, and video works. They allow users to upload high-quality MPEG-2 files to be added to the collection, and unlike YouTube, they offer the ability to download your original upload at any time.

For us, this was a win-win. Archive.org provided a FREE way for us to share our content, preserve it for the future, and have an offsite backup if we ever needed it.

We love the Archive and will continue to support them. The main goal of keeping our content open and available to the public for decades to come is just amazing.

That said, the Archive is NOT a backup solution, but given our budget constraints, it was a quasi backup. We frequently said, “If the Synology NAS went up in flames, at least the videos would not be lost forever”…but the recovery process might take that long.

When I learned about Amazon’s Glacier Deep Archive service earlier this year I was instantly intrigued and thought this might finally be a perfect solution for our needs. At “$1 per TERABYTE per month” they certainly had my attention.

Glacier Deep Archive is a new product offering from Amazon Web Services (AWS) that falls under their S3 storage product line; it is the lowest-cost storage class. When Amazon released the new storage class on March 27, 2019, their press release highlighted several use cases, including media and entertainment companies:

“there are organizations, such as media and entertainment companies, that want to keep a backup copy of core intellectual property. These datasets are often very large, consisting of multiple petabytes, and yet typically only a small percentage of this data is ever accessed—once or twice a year at most. To retain data long-term, many organizations turn to on-premises magnetic tape libraries or offsite tape archival services. However, maintaining this tape infrastructure is difficult and time-consuming; tapes degrade if not properly stored and require multiple copies, frequent validation, and periodic refreshes to maintain data durability.”

Amazon Web Services Press Release, March 2019

There is some “fine print” to be aware of, although none of it is a real concern for me. There are additional charges for retrieving your data, and the data is not instantly available; retrieval can take up to 12 hours. That’s the trade-off for the low cost, and again, not a big deal for my use case. You can check out the Amazon S3 website for more specifics. The whole idea of Glacier Deep Archive is LONG TERM storage: files that you need to keep and don’t want to lose, but may never actually need to access if your local files remain intact.
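For anyone curious what using the storage class actually looks like, here is a minimal sketch using the boto3 Python SDK (Part 2 of this post covers the AWS CLI workflow instead). The bucket and file names are placeholders, not our real setup, and it assumes AWS credentials are already configured.

```python
# Sketch: upload a file to S3 in the DEEP_ARCHIVE storage class, then
# (whenever recovery is needed) request a restore before downloading.
# Bucket and key names are placeholders for illustration only.
import boto3

s3 = boto3.client("s3")

# Upload an edit master directly into Glacier Deep Archive.
s3.upload_file(
    Filename="edit_masters/show_final.mpg",
    Bucket="my-station-archive",          # placeholder bucket name
    Key="edit-masters/show_final.mpg",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)

# Retrieval is not instant: first request a restore, then wait
# (standard tier can take up to 12 hours for Deep Archive) until the
# object becomes downloadable for the number of days you specify.
s3.restore_object(
    Bucket="my-station-archive",
    Key="edit-masters/show_final.mpg",
    RestoreRequest={
        "Days": 7,  # keep the restored copy available for 7 days
        "GlacierJobParameters": {"Tier": "Standard"},
    },
)

# head_object reports restore progress; once it completes, a normal
# download works during the restore window.
status = s3.head_object(
    Bucket="my-station-archive",
    Key="edit-masters/show_final.mpg",
)
print(status.get("Restore"))
```

The only thing that makes this “Deep Archive” is the StorageClass argument on the upload; the separate restore request is what accounts for the up-to-12-hour retrieval window mentioned above.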

For media professionals, I see Glacier Deep Archive as a great tool for:

  • Wedding and Event Videographers who want to back up raw footage, master files, and other assets.
  • Community Media Stations (PEG Access) looking to back up programs and raw footage.
  • Local Production Companies, again for all the same reasons.

Before I share my workflow and experiences with AWS Glacier Deep Archive, let’s step back and talk about backup best practices for a minute.

3-2-1 Backup

Peter Krogh’s 3-2-1 Backup Strategy is a well-known best practice adopted by IT professionals and government agencies alike. The 3-2-1 concept is rather simple:

3. Keep at least three copies of your data
The original copy and two backups. Multiple copies prevent you from losing the only copy of your data.

2. Keep the backed-up data on two different storage types
Spreading copies across different storage media ensures there is no single point of failure; a problem that takes out one type of storage won’t take out every copy.

1. Keep at least one copy of the data offsite
Even if you have two copies on two separate storage types, if both are stored onsite, a local disaster such as a fire or flood could wipe out both of them. Keep a third copy in an offsite location, like the cloud.

With the 3-2-1 Backup goals in mind, I’d like to share my experiences with AWS Deep Archive in Part 2 of this blog post. I’ll share the workflow I’ve established after running into some issues initially.

Keep in mind, I’m new to the AWS platform and I’m a media professional, not an IT genius. I am a tech geek and enjoy the challenge of learning new things. If you have any feedback, tips, or suggestions please feel free to post in the comments.

Part 2 – Backup to AWS Glacier Deep Archive using CLI
Coming Soon


Upgrade MediaWiki, 5 Easy Steps Using FTP and Web Browser

I run a relatively small wiki using MediaWiki hosted on a Dreamhost server. Generally, I like to stay on top of upgrades for the security patches, new features, and bug fixes, but the manual upgrade process makes me nervous.

I’ve been burned by nightmare Drupal CMS upgrades in the past. Losing hours of my life to troubleshooting and rebuilding is not something I ever want to repeat.

Anyway, I’ve procrastinated for a while now on this MediaWiki upgrade. I finally decided to jump in this week and get it done. I thought I would share the process I used, which is not exactly the recommended process on the MediaWiki website, but I found it easy and straightforward. I successfully avoided the command line, which makes me nervous and is certainly not in my comfort zone.