Packer/Ansible: Unable to acquire dpkg lock

Today I ran into a rather odd issue attempting to patch a base image using Ansible and Packer. Sporadically, my playbook would fail with the error:

Could not get lock /var/lib/dpkg/lock-frontend

If you ever run Ansible on Ubuntu 16.10 or later, be aware that Unattended Upgrades is enabled by default. On boot of new Packer bake instances, I noticed that it would sometimes lock apt to apply security updates. I spent a good part of my day tracking down why it would sometimes work and other times not; it turned out to be a race condition (would apt finish before the next step ran?). Seeing as it’s 2018 and we should not be afraid of security fixes, I didn’t want to disable this, because it is genuinely useful for security and such.

To get around this, I created a role called prerun, which does the following task:

# Check for unattended-upgrades
- name: Wait for automatic system updates to complete
  shell: while pgrep unattended; do sleep 10; done;

After including this in roles that used apt, my error went away. One of my builds took almost 30 seconds to get past this step; without the wait, it would have failed. Hope this helps another poor soul out there. 🙂



Another way, as I was shown with Packer (to avoid adding additional Ansible roles), is to run a pre and post shell script around your Ansible run!

In your base Packer JSON provisioners section, you would do:

  "script": "scripts/",
  "type": "shell"
  "extra_arguments": [
  "playbook_dir": "playbooks",
  "playbook_file": "playbooks/example.yml",
  "type": "ansible-local"
  "script": "scripts/",
  "type": "shell"

In the pre script, you would do whatever setup you need (most likely install Ansible and such), plus disable unattended-upgrades:

# Ensure unattended-upgrades is disabled to prevent breaking things later
sudo systemctl mask unattended-upgrades.service
sudo systemctl stop unattended-upgrades.service
# Ensure the process is in fact off
echo "Ensuring unattended-upgrades are in fact disabled"
while systemctl is-active --quiet unattended-upgrades.service; do sleep 1; done

Finally, in the post script, re-enable unattended-upgrades (and do whatever else you need, like deleting the Ansible dir and such):

sudo systemctl unmask unattended-upgrades.service
sudo systemctl start unattended-upgrades.service


Another solution could be to add this to your Ansible roles before package installation:

- name: Wait for /var/lib/dpkg/lock-frontend to be released
  shell: while lsof /var/lib/dpkg/lock-frontend; do sleep 10; done;

Thanks Gordon Kirkland for the additional solution!

Hope one of these many solutions helps others stuck with the same problem!

Installing cx_Oracle on a Mac: 2017 Edition

Hello internet friends! I am back with a much-needed follow-up to my old cx_Oracle install guide for Mac from 2013. Thanks to everyone for the feedback over the years; I am glad I was able to help so many folks. After many comments (and a new MacBook), I have decided to polish up my guide. Luckily, cx_Oracle is now on PyPI (no more compiling it yourself!) and the process is much, much easier! Let’s get started!

What’s new from last time?

  • No more need for sudo access; leverage user space properly.
    • sudo/root should generally be reserved for system-level operations, or at least operations you are familiar with. Never trust random sudo commands you find; they can seriously mess up your system.
    • This was one of my biggest complaints about the old article
  • No more compiling cx_Oracle from SourceForge.
    • It is now in PyPI…thank you sweet developers!
  • It works in a virtualenv
    • No longer do you need to set up cx_Oracle at the system level! You can now use standard virtualenvs. Nothing is stopping you from installing at the system level, though!
  • It works in Python 3.
    • Time to move to Python 3 everyone. This guide is written for Python 3, although the steps for Python 2 shouldn’t be too different.
  • Fewer steps overall
    • Before, there was a bunch of unzipping library files and symlinking random ones. I don’t have time to verify, but it looks like the process has been improved significantly.

Necessary Downloads:

  1. Oracle 12.1 Instant Client 64-bit Libraries
    1. Instant Client Basic
    2. Instant Client SDK
  2. virtualenv – Optional but avoids polluting system pip environment by creating virtual python environments!

Next Steps:
Assuming you downloaded all the above files (except virtualenv), and they are found in your Downloads directory, run the following commands:

mkdir /Users/<username_here>/oracle
mv /Users/<username_here>/Downloads/instantclient-* /Users/<username_here>/oracle
cd /Users/<username_here>/oracle
cd instantclient_12_1/
ln -s libclntsh.dylib.12.1 libclntsh.dylib
vim ~/.bash_profile


Add the following to your .bash_profile (and run it in your current shell too):

export ORACLE_HOME=/Users/<username_here>/oracle/instantclient_12_1
export LD_LIBRARY_PATH=$ORACLE_HOME


Once your changes are in your .bash_profile, re-source it and the instant client should be set up! Validate that all your settings are correct by doing:

. ~/.bash_profile
echo $ORACLE_HOME # should be /Users/username_here/oracle/instantclient_12_1 if you followed this guide
echo $LD_LIBRARY_PATH # same as above
which python # usually /usr/bin/python or your virtualenv


If all of the above is set, then you are ready to proceed with the cx_Oracle install!

. ~/.bash_profile
mkdir /Users/username_here/virtualenvs
cd /Users/username_here/virtualenvs
pip install virtualenv # If you haven't already
virtualenv --python=python3.6 venv # It's 2017, time for Python 3
. venv/bin/activate
pip install cx_Oracle


If all went according to plan, it should have installed successfully! (If not, post in the comments and I can help!) To test, simply:

>>> import cx_Oracle
>>> cx_Oracle.version
>>> conn = cx_Oracle.connect('pythonhol/welcome@')
>>> conn.version
>>> conn.close()


Hopefully this new and improved guide gets you folks up and running faster! As always, I look for your feedback below!

The Five Phases of DevOps: A Retrospective on a Retrospective

This news is a bit delayed, but I wanted to share something exciting with everyone!

Recently, I accomplished one of my many personal goals: speaking at a local conference. On August 30th, I took to the stage at DevOps Days Chicago to talk about the Five Phases of DevOps! Since then, I have also given my talk at Windy City DevCon (September 16th), sponsored by the local Google Developers Group! I haven’t gotten a chance to publicly say this, but I want to give a huge thank you to all the conference organizers and speakers, my friends and family, and of course my team at gogo for helping me get prepared for this!

The talk was titled “The Five Phases of DevOps”, in which I took the audience on a journey through how we incorporated DevOps at my current employer (gogo). This talk was a bit out of my comfort zone since it really wasn’t a technical talk, and technical work is my day job. I didn’t show off any build tooling, I didn’t have Python code examples, and I didn’t even “do the DevOps” to magically create a presentation.

I was a bit worried early on; however, I felt passionate about the amazing work we did over the past year, so I felt I needed to do this talk. If we can do it, so can you.

I wanted to give a brief retrospective of how I prepared for my first conference talk. I hope this post has some tips to help out folks about to give a talk! Everything I am about to say is my personal opinion!

Lessons Learned:

When submitting a proposal, it’s better to have one solid topic than many general topics.

When trying to pick a topic, pick an idea that you feel passionate about and run with it. Early on, I was tempted to submit many small proposals but found myself struggling to generate enough content for the time needed. The final talk I submitted was actually stitched together from three separate topics! See if you have similarities across the topics you want to submit; you may be able to incorporate them into one!

A helpful practice would be to come up with bullet points for each topic you want to submit and see how it flows together. Flow is the most important thing when speaking to a large crowd. When taking your journey to Middle Earth, you want it to be as free flowing and smooth as possible; avoid trolls and goblins if at all possible.


Practice, practice, practice

This one is commonly stated but prepare your slide deck in advance and take the time to refine it! Once you feel you have a starter set of material, present it to a small audience and ask for honest and critical feedback. This is when you can enhance your content to deliver a solid talk for your real audience!

A helpful tip is to practice in front of both a technical and non-technical audience. By doing this, you can see if you are technically on point, but also you can see if you can hold the attention of someone who might not understand the topic as well.


Use text to supplement your talk; not drive it

A great piece of advice I was told by another speaker is that

“Your audience is there to hear you speak; not read your slides.”

Don’t load up your slides with a bunch of text and just read them. You are a speaker, not a reader! Have key points on your slides, and follow up with clear thoughts and explanations. Also remember to have a good balance of images and text.

Early on, the most common feedback that I received was that I was reading my slides too much. Remember, speaker notes are your friend! Put key points you want to mention in them for each slide.


Don’t have your text and images fight for attention

Try to have a solid balance of both in your slide deck. Too much text and your audience may get bored. Too many images and you may distract your audience.

Don’t have big blocks of text with many images. Realize that your audience will need to focus on three things at the same time; your text, your images, and you. Use your content to supplement you!

It may be helpful to use large image slides when talking to the audience and explaining your points. Use text slides to deliver takeaway quotes or supplement what you are talking about.


Tie it back to real world examples

When trying to explain something, it is really helpful to show real use cases or examples to help drive the concept home. You probably have already seen this done in many technical talks. Traditionally this is done by showing code snippets, technical implementations, or service level examples.

As I mentioned this was my first non-technical presentation. I couldn’t just toss random code examples or show off a live demo. I found that my best ally was to leverage facts and metrics. Try to explain your message in an easily relatable way; pictures, stats, stories. (Everyone loves story time)

The goal is to show that by doing X you can see result Y, which means Z. Since my talk covered a year’s worth of progress, I found a timeline was the best tool to use. I took each phase of DevOps, related it to a quarter of the year, and explained the technical details there.


If you have live demos, record a video

When giving a tech talk, we all want to do a live demo in front of an audience. There is a rush of excitement we all get showing off technology just working. With that being said, realize the odds are forever not in your favor when it comes to live demos at a conference. Shitty WiFi? Displays not working right? Typos while being nervous from speaking?

I always suggest having a backup video recorded just in case something happens. You can try your live demo, and the second you see oddities, flip to the video. No one wants to watch you justify issues, fiddle with settings, or fix bugs/typos; they happen. Whether it is your fault or not, no one will care. Remember, the goal is to supplement your talk and key points with a live demo, not distract from it!


Understand the logistics! Ask Questions!

Some things you should know in advance:

  • Is it a lightning talk or a full-fledged talk?
    • You probably already know this from your application but it never hurts to confirm. Also knowing when you are scheduled to talk early helps as well.
  • How much time do you have? Does that include questions?
    • I totally messed this one up and assumed my 30-minute slot had additional time for questions. Let’s just say I had to do some slide surgery the night before to accommodate this. Don’t let this happen to you.
  • What display and power connectors will be provided?
    • Make sure you can present all that great material you have. Bring your own adapters if needed as well. If you want to set up ahead of schedule, ask your event organizers!
  • What display resolution and how many screens will you have?
    • Design your slides for what you will be presenting on! A 70+ inch screen combined with 12pt font might lead to a bad time for folks as they digest your content; especially the crowd out in the back! A tip is to make sure you can easily read your material from at least 7 feet away when writing them!
Take it slow, breathe, and have fun!

Realize that no matter how much you practice, you may/will trip up. If you feel yourself getting overwhelmed…STOP! Breathe, drink some water, focus, and compose yourself. It is not the end of the world, and a small flub here and there is normal!

A tip shared with me by another speaker: as you progress through your talk, you may enter a zen-like zone. You will stop worrying about the crowd, you will find your words flowing naturally, and ultimately you might actually say things you don’t remember or didn’t practice. I found that all my key points and audience takeaways were delivered best during these phases. Relax, have fun, and go with the flow!

Review yourself and share your work!

So your talk is over, it’s time to share what you did! Self-promotion is one of the best things you can do, so make sure to include your relevant contact information at the beginning and the end of the slides. By including them at the end, you don’t need to awkwardly shift around to show people your contact info! When you are all done, make sure you share your slide deck on the interwebs, and also write a blog post about it (hopefully much faster than I did!).

If you have a chance, record yourself or have someone record you during your talk. Watch the video or listen to a recording of the talk as it was delivered.
Make notes and determine where you felt you needed more material, or where you need to practice more. Just because you gave this talk once doesn’t mean you won’t be able to give it again!


Well, that is a wrap for now. I have way more tips to share, but I also can’t type forever. If you need any advice or want to talk about my presentation feel free to reach out to me via Twitter or e-mail! I am planning on writing another article about the phases of DevOps with hopefully a bit more context and examples; look forward to that!

Below you will find the recording of my talk and also the slide deck for your viewing pleasure!




AWS Summit: GameDay Experience

Today, I participated in a local AWS Summit’s free GameDay event here in Chicago. Going into this, I was not sure what I was getting myself into. The details of the event sounded intriguing; split up and compete to see who can create the best scalable solution.

The event starts by splitting you up into groups of 3-4 individuals. It pairs beginners with advanced AWS users to not only balance out the room but also to teach at the same time. I thought that was a really cool and neat concept: learn from one another. Walking into the room, I voiced a confident “advanced”, but leaving, I was left hungry for more and full of questions. It was well worth my time and loads of fun.

First, the CEO of Unicorn Rentals walks in and gives the most “inspiring” talk that basically can be summed up to: “I told folks on Good Morning America that we are live, good luck, don’t mess up, and make me a lot of money.” Fortunately for us, we were given a comical half-assed “Runbook” of the architecture and how it works; sadly this is probably more documentation than some projects I have worked on in my professional career! From coffee stains to passwords scribbled all over the place, I knew from the get go that this was going to be a fun little ride.

The "Runbook"

The first thing we did after logging in was delete all those scratched-out accounts and change the password on the root account! Once we secured the account, we needed to register our team on the scoreboard. This was done by creating a root TXT record in the hosted zone with our team name. It appears the organizers listened for the TXT records and registered teams off that; kinda cool! From here, the fun begins and our application begins taking load.

When we were handed the account, we were told that it was working today, but we needed to make it better to handle increased load in ~30-minute increments. The first thing we checked was how our application was configured. We noticed that the root A record was pointing to an individual EC2 instance with a static IP and not to an ELB/ASG. We corrected this by creating an ALIAS record to the ELB for the root record and associating the ELB with the ASG. Next, we were told to create a simple deployment architecture using only User Data and Auto Scaling Groups. We started by copying the existing broken launch configuration, fixed it so it pulled the new code base and didn’t have a shutdown command at the end (WHY!!!), and then proceeded to launch with a scale of 2 instances (hard-coded for now). Keep in mind this was only 30 minutes into the competition.

From here, we faced various issues such as:

  • The “Network Engineer” messed up our ACLs with a bad script
  • The same “Network Engineer” killed our main route table with no default Internet Gateway.
  • We were met with other nefarious and “accidental” issues along the way; it was great to simulate those Oops moments.

Aside from these moments, we began to notice that our subnet for launching instances was limited to only a /28 CIDR block! Load by this point was starting to jump to almost double digits; this just wouldn’t do! We fixed it by creating three subnets (one in each AZ) and associating them with the ELB and Auto Scaling Group. With ample space, we could now focus on improving our scaling policy. At first we scaled based on network requests, but later we determined we should have used the ELB latency alarms and scaled off those. With a solid policy and enough servers to handle traffic, we figured we were good to go! Wrong, so wrong; I can’t believe how wrong we were.

Despite having everything in order, our servers still couldn’t keep up with load! What was going on? There were enough servers, they weren’t crashing, and there was no high load. What could be the problem? We were stuck on this for some time, until we logged into an instance and noticed that the application handled one request with an average latency of about 4 seconds. Playing with the binary, we noticed there was an ElastiCache (memcached) option we could leverage. We sprang to the occasion, built up a small distributed memcached cluster, and configured our app to use it. Now our average request time was ~0.2 seconds! Sweet!

However, despite the fast response times, we were still getting failures. It didn’t make any sense until we ran the binary and watched it. It appeared that it would only handle one connection at a time, and reject any other connections while it was handling that one request. This ended up taking the most time, and we couldn’t implement a solution before the clock ran out. Speaking with some of the Solutions Architects, they mentioned some teams used Docker to run multiple copies on the same host, and some folks found out that you can run about ~4 binaries concurrently on different ports to handle the load. While the competition wound down, I was implementing a solution that ran 4 instances of the binary on the same host, leveraging iptables to round-robin between them; unfortunately, I didn’t get a chance to see it in practice, but I was confident this solution would have worked.

Change Management...HAH

All in all, it was a great experience. No, my team did not win, but I got to explain and teach others about AWS and some best practices. Towards the end, it was really cool to see a team effort as we tackled harder and harder challenges, and overall I think we all walked away with a little more knowledge than we came in with. At one point, I had 10 people sitting around me listening to me explain how we do deployments at gogo, our challenges and pitfalls with AWS, and other random topics. Even some of the Solutions Architects came by to sit and discuss, and joked that “I needed stadium seating” with the number of people around me! At that moment, it made me realize we do some pretty wicked stuff at work. If anything, this re-energized me to go back and implement some of the things I learned!

With all this being said, I really have to thank the AWS Summit GameDay organizers! This was great, and loads of fun. Please keep doing these kinds of events, especially in other cities!

Moving my Blog

I am in the process of failing over my website from GoDaddy to AWS. Trying to tune WordPress to run a bit more optimally using RDS, proper caching, etc. The plan is to eventually migrate away from WordPress to my own Django Blog Engine but that is still in development.

You might see some bumpy things happening, but I hope for this to be relatively painless for all of you!

EDIT: And we are back online! This page is smoking fast now and SEO optimized! Finally broke away from GoDaddy! Right now, I am running in a VPC in AWS using RDS! That’s a 2015 goal complete!

Secret Santa – Python Flask Webapp

Hey all, sorry for the past quiet months. It has been quite hectic with my company moving downtown, me changing positions, and ultimately trying to crank out and learn some really cool and exciting technology. As with every winter, I expect to have a bit more downtime, and hopefully I can share some cool things I have been working on in my spare time!

Today, I am excited to share with all of you a fun but very basic (and I mean verrry basic) web app I wrote while trying to digest an exorbitant amount of Thanksgiving dinner. I was tired after dinner but wanted to learn something new! Some backstory: every year, my family and I get together for Thanksgiving dinner and we do a yearly “Secret Santa” pool. This is usually quite problematic as:

  1. Sometimes people draw their own name, thus making it public who is left in the pool
  2. Someone isn’t able to make it to dinner due to other arrangements so we need to draw the name and just tell them (but that is no fun…)
  3. Someone spills the beans on who they got thus potentially breaking the secret chain of who got who

In comes my idea: how about we do Secret Santa online this year? It sounded like a great idea; I would whip up a simple app that collects names and email addresses and shoots random emails out using Amazon SES. This would have worked if it weren’t for the fact that not everyone at dinner had an email address (primarily my grandparents). So I had to come up with a solution that solved all three of the problems above but also allowed folks without email to sign up.

With the goal set, I created a simple Flask app that lets people register their name and a unique passphrase. Once registered, users can check their status to see who they have been matched with. When all people have registered, the admin (me in this case) goes to a randomizer URL and scrambles everyone’s matches. Until the randomizer runs, users are told to check back later or poke the admin with a stick to kick off the match pairing. In my case, I told everyone to register by noon Friday, and I would do the shuffle then. Once the randomizer completed, people could check their pairing.
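Under the hood, the shuffle boils down to a derangement: keep shuffling until nobody draws their own name. The snippet below is only an illustrative sketch of that idea (the names and retry approach are mine, not the exact code from the repo):

```python
import random

def secret_santa_pairs(names):
    """Shuffle until nobody draws their own name (a derangement)."""
    while True:
        receivers = names[:]
        random.shuffle(receivers)
        if all(giver != receiver for giver, receiver in zip(names, receivers)):
            return dict(zip(names, receivers))

# Hypothetical family pool
pairs = secret_santa_pairs(["Joel", "Grandma", "Grandpa", "Cousin"])
for giver, receiver in pairs.items():
    print(giver, "->", receiver)
```

For a family-sized list the retry loop settles almost immediately; over a third of random shuffles of four names are valid derangements.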

So after about 2 hours, I came up with the solution, implemented it, and had people up and running on AWS. It worked surprisingly well, and I even got a chance to show my younger cousin how it works! Best of all, I have made the code open for you all to use. I added a lot of “bootstrapping” files such as Apache configs, sqlite3 setups, and even a sample WSGI file. Below you will find a list of technologies I used, as well as a number of improvements that could be made.

Hope this helps someone else out there as well! Happy Holidays all!

Technologies Used:

  • Python
  • Flask
  • Sqlite3
  • AWS (for EC2 primarily)

Potential Improvements:

  • Admin Interface
  • Secure Passwords
  • Allow for multiple pools
  • SQLAlchemy
  • Various Flask and Pythonic improvements


March Madness Payout Calculator

So it’s that time of year again: March Madness has come around! This means people spending countless hours researching teams, creating brackets, joining pools, etc. At work, every year we start a pool on the scores of each round. Basically, you get assigned a random winning number and a random losing number. You look at the final box score of a given game, and if the numbers match your assignment, you win X amount of dollars based on the round! Simple, right?
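As a sketch of the math, a pool like this boils down to a single digit comparison per game. The payout amounts and the “last digit of the score” rule below are my own illustration, not necessarily our pool’s exact rules:

```python
# Hypothetical payouts per round; tweak to match your pool
PAYOUTS = {1: 5, 2: 10, 3: 20, 4: 40, 5: 80, 6: 160}

def winnings(round_num, winner_score, loser_score, my_win_digit, my_lose_digit):
    """Pay out when the last digits of the final box score match my assigned digits."""
    if winner_score % 10 == my_win_digit and loser_score % 10 == my_lose_digit:
        return PAYOUTS[round_num]
    return 0

print(winnings(1, 78, 64, 8, 4))  # digits match -> 5
print(winnings(1, 78, 64, 7, 4))  # winner digit misses -> 0
```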

This year, I found myself busier than normal (basically, I have other things to do than watch college basketball all day), so I decided to write a quick app to determine how much I will win. I am leveraging an unofficial NCAA API, so the data might not be 100% accurate, but it will be good enough for my purposes! 🙂

Feel free to check out the source code below so you can use it too. Note: you might need to tweak payouts, and your magic numbers to your specs!


Download an SSL Certificate using a one-liner!

Had a task today to sync up an SSL certificate; the only problem was I didn’t have the certificate, nor could I find it anywhere. I knew I could use openssl and get a printout, but I wanted an actual file in my tmp directory. So, obviously, start with the basics…

openssl s_client -connect HOST:PORTNUMBER

Simple and straightforward…next, how do I pipe this to a file? My gut told me:

openssl s_client -connect HOST:PORTNUMBER > /tmp/out.cert

Well…that worked…sort of. I want to be able to terminate this so I don’t have to hit ctrl+c. Well, what if I pipe echo into it…

echo | openssl s_client -connect HOST:PORTNUMBER

Bingo! That worked marvelously! But wait…drats…extra stuff about the cert chain. I googled around and found someone had already done the heavy lifting using sed!

echo | openssl s_client -connect HOST:PORTNUMBER | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/websitessl.cert

Awesome! Now I have a clean, importable cert saved to /tmp/websitessl.cert.

Thanks Sean and Vinay from Serverfault. They actually had some extra tips I picked up too!

1) You can use -showcerts if you want to download all the certificates in the chain. But if you just want to download the server certificate, there is no need to specify -showcerts

2) echo -n gives a response to the server, so that the connection is released

So TLDR is, just use:

echo -n | openssl s_client -connect HOST:PORTNUMBER | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /tmp/websitessl.cert
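For the curious, that sed filter just grabs everything between the BEGIN/END markers. If you ever need to post-process s_client output inside a script instead of a shell pipeline, the same trick in Python might look like this (a quick sketch):

```python
import re

def extract_certs(s_client_output):
    """Pull out each PEM certificate block, like the sed range filter above."""
    pattern = r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----"
    return re.findall(pattern, s_client_output, flags=re.DOTALL)
```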


No wget, No Problem!

I don’t post enough, so I am going to try to make some day to day blog posts here and there.

So today, I had an interesting experience with some hardware at work running a custom Linux OS. It was pretty locked down, and there was no simple “yum/apt-get install.” To make matters worse, most “core” packages were custom builds, and some didn’t exist at all; among the missing were wget, rsync, and curl.

I was tasked with putting a rather large file on this system and was told the best way was to take a USB stick and plug it in. See…I am lazy (it was also -25 degrees outside on the way to Building B). This meant wasting my time walking through the tundra I call Chiberia when I could have been hammering away at something else.

First thing I do: log in to the box. Alright…so I have remote access. Let’s see, I can resolve internal addresses. Typical. Hmm…what about other addresses? Interesting, I can reach them as well. Telnet to 80? Cool, that works too! wget failed…as expected…meh, worth a shot. Hmm, this is a custom Linux; I mean…they really shouldn’t have touched Python. I wonder…BINGO! I got a Python shell! In the shell, I ran the below script to essentially “wget” a file onto the box. It took a while, but it worked.

import urllib  # Python 2 standard library
# First argument is the source URL (omitted here); second is the local file name
urllib.urlretrieve("", "bigoldfile.tgz")

Sure this works, but again…I’m lazy; that is a bit of typing anyway. I wanted something scriptable that I could just call with a URL. Behold, my script. I know, nothing fancy, but it works. Just call it: python <some_url>. Enjoy it for what it’s worth.

import urllib  # Python 2 standard library
import sys

url = str(sys.argv[1])
file_name = url.split("/")[-1]  # name the download after the last path segment
urllib.urlretrieve(url, file_name)
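That script targets the Python 2 interpreter the box shipped with. On anything with Python 3, urlretrieve moved to urllib.request, so an equivalent sketch would be:

```python
import sys
import urllib.request

def fetch(url):
    """Download url into the current directory, named after the last path segment."""
    file_name = url.split("/")[-1]
    urllib.request.urlretrieve(url, file_name)
    return file_name

if __name__ == "__main__" and len(sys.argv) > 1:
    fetch(sys.argv[1])
```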

New Year, New You! New Portfolio Design!

Before I start, happy holidays and have a safe and awesome holiday season! Looking forward to all the adventures 2015 brings!

So I know I haven’t updated this page in a while, but I figured what better time than now! 2014 is coming to an end, and 2015 is coming at us fast. For now, enjoy this new template, “Twenty Fifteen”! It is a stock theme that comes with WordPress. By switching, I not only afforded myself a nice-looking website but also a responsive look and feel that works great on mobile. It looks very sharp and required only some minor changes to the core PHP theme files and CSS to my liking, such as:

# echo Joel loves using these to simulate a terminal!
# Joel loves using these to simulate a terminal!

Aside from a crazy year @ gogo with a lot of new and exciting projects, all has been well! Still learning, still automating, still Joel! I am still sprucing up the page, along with my updated Resume and the projects I can share. Check back soon!

In terms of a short article, how many of you have seen “The Interview”? The movie was first pulled by movie theaters in fear of being compromised, then Sony was forced to pull the movie entirely, Obama criticized Sony for pulling it, and Sony ended up releasing the movie to theaters that wanted to play it! Crazy ride! Aside from all the political hoopla, the movie is very crude yet very entertaining. Expect nothing less than a heavy-hitting comedy, one that sometimes even goes over the top.

The entire banning/unbanning story, though, is not as important (in my eyes) as what Sony did next. Sony also made the movie available for streaming straight to your home! That’s right! You can rent the movie just like any other from Google Play, Xbox Video, and other providers! I feel this is going to quickly become a trend if the movie does well. As a movie company, why would I lose potential profits to a movie theater (which can restrict, refuse to show, or dictate when and how long my movie is available) when I can go straight to the consumer? Heck! I would suggest they decrease the rental period from a 24-hour window to a 6-hour window!

Let’s face it, the movie theater is a great experience, but the fact that a movie ticket alone costs nearly the price of the movie at retail is a bit absurd. The theater’s biggest draw used to be the fact that people had tiny televisions with less-than-optimal quality. Today, we have 7.1 surround sound in our homes and 4K television sets that can even do 3D! I don’t feel that movie theaters should cease to exist, but rather that movie studios should work toward making BOTH options available to the consumer market. Well, enough ranting! What are your thoughts? I, for one, am for it; how about you all?