Life as Clay

Paperclip, S3 & Delayed Job in Rails on Heroku – Jan 2012



Edit: I forgot to mention here that with Paperclip 2.4.5 you have to use the ‘paperclip-aws’ gem in order for Paperclip to work with Amazon’s newer ‘aws-sdk’ gem. That is no longer true with Paperclip 2.5.
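
For reference, here is a minimal Gemfile sketch of the gem combination this post ends up with (only the relevant gems are shown; any pins beyond the versions mentioned in the post are my own choice):

# Gemfile (relevant gems only) -- a sketch of the setup described in this post
gem 'paperclip', '2.4.5'     # 2.5 no longer needs paperclip-aws
gem 'paperclip-aws'          # bridges Paperclip 2.4.5 and the newer aws-sdk gem
gem 'aws-sdk'
gem 'delayed_job', '2.1.4'   # 3.0.0 gave me NoMethodError exceptions, see below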

I followed this good tutorial on how to push Paperclip image processing into the background with delayed_job. My Rails app is deployed to Heroku on the cedar stack. I had problems with NoMethodError exceptions under delayed_job 3.0.0, so I downgraded to 2.1.4; I’m also using Paperclip 2.4.5. In the end, I found that I could ditch the struct presented in the aforementioned tutorial and just call handle_asynchronously on my reprocessing method. This is what the code looks like:


class Image < ActiveRecord::Base
  attr_accessible :profile_pic,
                  :caption,
                  :note,
                  :pic_file_name,
                  :pic_content_type,
                  :pic_file_size,
                  :pic_updated_at,
                  :pic,
                  :pic_attributes,
                  :processing
  
  belongs_to :imageable, :polymorphic => true
  
  # Added for paperclip-aws
  def self.s3_config
    @@s3_config ||= YAML.load(ERB.new(File.read("#{Rails.root}/config/s3.yml")).result)[Rails.env]
  end
  
  has_attached_file :pic,
                    :styles => {
                      :large => "500x500>",
                      :thumb => "100x100>",
                      :tiny => "50x50>",
                      :smallest => "24x24>" },
                    :default_url => '/:attachment/:style/missing.png',

                    # Added for paperclip-aws
                    :storage => :aws,
                    :s3_permissions => :authenticated_read,
                    :path => "images/:id/:style/:filename",
                    :s3_credentials => {
                      :access_key_id => self.s3_config['access_key_id'],
                      :secret_access_key => self.s3_config['secret_access_key']
                    },
                    :bucket => self.s3_config['bucket'],
                    :s3_protocol => "https"
                      
  validates_attachment_content_type :pic, :content_type => [ /^image\/(?:jpeg|gif|png)$/, nil ]
  
  # How to implement on Heroku with processing in the background
  # http://madeofcode.com/posts/42-paperclip-s3-delayed-job-in-rails
  
  # cancel post-processing now, and set flag...
  before_pic_post_process do |image|
    if !image.processing && image.pic_changed?
      image.processing = true
      false # halts processing
    end
  end

  # call method from after_save that will be processed in the background
  after_save do |image|
    if image.processing
      processImageJob(image)
    end
  end

  # delayed_job runs this in the background via handle_asynchronously
  def processImageJob(image)
    image.regenerate_styles!
  end
  handle_asynchronously :processImageJob

  # generate styles (downloads original first)
  def regenerate_styles!
    self.pic.reprocess!
    self.processing = false
    self.save(:validate => false) # skip validations; :validate is the Rails 3 option name
  end

  # detect if our source file has changed
  def pic_changed?
    self.pic_file_size_changed? ||
    self.pic_file_name_changed? ||
    self.pic_content_type_changed? ||
    self.pic_updated_at_changed?
  end
                      
end
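
For completeness, config/s3.yml only needs the three keys the model above reads (access_key_id, secret_access_key, bucket). Pulling the values out of ENB via ERB is just one way to keep credentials out of the repo on Heroku; the variable names below are my own, not anything Paperclip requires:

# config/s3.yml -- a sketch; only the key names are dictated by the model above
development:
  access_key_id: <%= ENV['S3_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['S3_SECRET_ACCESS_KEY'] %>
  bucket: my-app-dev

production:
  access_key_id: <%= ENV['S3_ACCESS_KEY_ID'] %>
  secret_access_key: <%= ENV['S3_SECRET_ACCESS_KEY'] %>
  bucket: my-app

One more Heroku note: on cedar the queued reprocessing jobs only run if a delayed_job worker process is up, so the Procfile needs something along the lines of

worker: bundle exec rake jobs:work

scaled up with heroku ps:scale worker=1.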


Written by Clay

January 12, 2012 at 14:29

Posted in Code, Rails, Ruby


2 Responses


  1. This is slightly off topic, but I am wondering how you would sync your database backup with an S3 bucket backup. What about files that are uploaded/deleted between the time the db backup starts/stops and the S3 bucket dupe script starts/stops? This is, in general, an issue when backing up models with attachments, but it seems like an even bigger one with Heroku and S3.

    Jeff Doyle

    January 25, 2012 at 12:33

  2. That’s a good question. I’m currently using taps to pull my db to another machine and don’t have the backup automated. It’s just a hobby app, however, so it’s no big deal if I lose the data.

    For high-volume applications, where it’s actually a concern that you would have the situation you described, I suspect that you could use one of Heroku’s add-ons to automate backups with real-time database mirroring. In fact, you may be able to do that pretty easily with Amazon’s service, depending on the database you’re using.

    Alternatively, you could use Heroku’s free Amazon RDS add-on (http://addons.heroku.com/amazon_rds) and not host your database with Heroku. Amazon has redundant mirroring by default.

    Clay

    January 25, 2012 at 12:43

