Following the status of your projects in Signal from CCMenu

After announcing Signal, one of the first pieces of feedback I received was from Raphael, asking if I was planning to add a feed compatible with CCMenu.

I found the feature very interesting, and as of today, to integrate Signal with CCMenu (OS X) or CCTray (Windows) you just need to point the tool to the URL http://host/projects/status.xml and select the desired project.
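For reference, the feed follows the cctray XML format that both tools understand; a response looks something like this (the values here are just illustrative):

<Projects>
  <Project name="signal" activity="Sleeping" lastBuildStatus="Success"
           lastBuildLabel="42" lastBuildTime="2009-05-21T12:00:00"
           webUrl="http://host/projects/1"/>
</Projects>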

I recorded a short video (1:13) demonstrating how to configure the tool and how it works. I hope you like it:

If you liked Signal, please consider recommending it at Working With Rails.


Really easy continuous integration with Signal

Signal is a continuous integration server written in Rails that, I think, takes the best features of similar systems and merges them into a very simple-to-use application.

When I started working with Rails, the first continuous integration server I tried was CruiseControl.rb. CC.rb is also written in Rails, and what bothers me most about it is that to add a project you have to log into the server and execute:

./cruise add [project-name] -r [repository] -s [svn|git|hg|bzr]

After running that command you can open a YAML file and configure the recipients of the emails the application sends when the build breaks. In the land of scaffolding, this makes no sense to me. It’s much simpler to have a page where any user can register a project without much difficulty, like Hudson, Integrity and Signal do.

Another thing that bothers me about CC.rb is that it creates a lot of cruise processes, and sometimes they hang. For a while I thought the problem was happening only to me, but after talking with some people I found it isn’t so uncommon.

The third thing I’ll complain about in CC.rb is that its home page shows the last five builds of each project. IMHO, even the last two builds are too much information. Whether old builds were broken means nothing to me; the only thing I care about is whether the project is currently in a good state, information that Integrity and Signal display very well.

Integrity does a lot of things in a cool way, but I don’t like that to install it you have to download the gem and then execute a command. Worse, depending on the server you want to use, the command takes different parameters. Before explaining why this installation method is a problem, let me confess another thing I don’t like about Integrity: it doesn’t come with email support. Besides that lack being ugly, to add it you have to download another gem and modify a file. It looks simple, but it didn’t work for me, apparently because the version of the plugin gem was not compatible with the version of the application gem. Now, how do you change the version of an application that was installed by executing a command from a gem? Would it only require downloading the second version of the gem and deleting the first? Would it require reinstalling the application? I don’t know; maybe I did something stupid, but nothing worked for me, and this is why I believe this way of distributing applications is a problem.

Signal tries to take advantage of Git. To install Signal you can run:

git clone git://github.com/dcrec1/signal.git
cd signal
rake inploy:local:setup

If you ever want to go back to a specific version, you can just run git reset --hard COMMIT; simple, and it takes advantage of the history by knowing where we are and what we are undoing.

One thing CC.rb has that I miss in Integrity is showing the date of each build on the home page. Signal shows how long ago each project was built, so we can easily tell when the last build happened and whether builds are being created at all.

report_card is a project that integrates metric_fu with Integrity, but it requires installing another application and running a few commands, which is too complicated for me. That’s why I decided Signal should have native integration with metric_fu, RSpec and Cucumber.

Signal’s integration with RSpec, Cucumber and metric_fu is very simple. On each project page there are three links (specs, features and metrics), which point to ROOT/docs/specs.html, ROOT/docs/features.html and ROOT/tmp/metric_fu/output/index.html, metric_fu’s default path. This means that if we are generating HTML specifications, they can be accessed right from the project page.

Signal also integrates with Inploy. If we want to deploy an application, we can do it from the project page by clicking on deploy, which executes the rake task inploy:remote:update.

Hudson is a very good integration server, but I think Signal beats it by being a little simpler to use, by being developed in Rails and by being hosted on GitHub.

Being developed in Rails is an advantage because this type of project usually grows several plugins, and Rails has native support for them. Almost every Rails developer knows how to install a plugin and how plugins work, so creating them will not be a problem.

Being on GitHub is an advantage because it facilitates collaboration on the project. As of today, Signal relies on a lot of conventions, but if for some reason anyone needs a special configuration, they can fork the project and start contributing. Since Signal was developed with Rails and is pretty simple, people will have no difficulty understanding how it works.

One of Signal’s conventions is that each build is created by running the rake task build, which, depending on the project, looks something like this:

task :build => ['db:migrate', :spec, :cucumber, 'metrics:all']

I recorded a short video of nearly three minutes demonstrating how easy it is to install and use Signal. In the video I download the application and install it using Inploy. Then I register a new project and create a new build. The build is not created automatically along with the project because sometimes we need to perform some actions before running it. Just for demonstration purposes, I chose a Ruby project with a very fast build, given that it runs neither the Cucumber features nor the metrics. After the first build, I start delayed_job and create a build from the command line, as you would from a Git hook (sketched below, after the video). The video ends with me showing a build of the Signal project itself and how easy it is to view the specs, the stories and the metrics from the project page.

Really easy continuous integration with Signal from Diego Carrion on Vimeo.
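By the way, here is a sketch of what that Git hook step could look like. The builds URL below is just an assumption for illustration, not Signal’s documented API; check your instance for the actual path:

#!/usr/bin/env ruby
# .git/hooks/post-receive (minimal sketch; the builds URL is an assumption,
# not Signal's documented API)
require 'net/http'
require 'uri'

# an empty POST to the project's builds resource triggers a new build
Net::HTTP.post_form(URI.parse("http://host/projects/my-project/builds"), {})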

If you liked Signal, please consider recommending it at Working With Rails.

Rails deployment made easy with Inploy

After I worked on several systems made in Rails, Capistrano became one of the things that bothered me more and more. I tried other alternatives, including Vlad, but none satisfied me, so I decided to create my own solution and called it Inploy.

Before Capistrano fans crucify me, I want to make it clear that I don’t think Capistrano is a bad tool, just that it’s bad for my needs, which are much simpler than what Capistrano targets.

One of the things I don’t like about Capistrano is that it creates a lot of folders, one for each release. Some people argued that these are useful when you want to roll back a release, but I never needed that, and neither did most of the people I talked with. The rest told me they did have to roll back a deploy at some point, but the circumstance was that the deployed code had broken the build.

Inploy uses Git, so if we need to roll back a deploy to some version, we can use git reset. Of course this solution is more limited than Capistrano’s, but it solves a problem that should probably never occur in the first place, one most likely caused by bad development practices.

As a consequence of Capistrano’s strategies, its deploys are a little slow for my taste. I know there are configurations that can speed up the process, but configuring is exactly what I don’t want to do. I want a tool that works the best way by default, and in most cases the best way to a fast deploy is the simplest one: git clone on setup and git pull on update, which is exactly what Inploy does.

Speaking of defaults, it’s very annoying to have to tell Capistrano how to restart the server, how to clean the cache and which tasks to rake, like more:parse and asset:packager:build_all. Inploy tries to solve these problems by executing common tasks by default, being smart enough to identify which tasks it should rake and in which order. The idea behind the plugin is that you shouldn’t have to worry about how to deploy that new tool you’re using; things should just work, without the possibility of something going wrong.

Another advantage of Inploy over Capistrano, for most cases, is that it has just one way to deploy, called update. This task updates the code, migrates the database, runs some other tasks and restarts the server. It makes no sense to me to update the code without running migrations. In every team I’ve worked with, everyone could deploy the application, and at some point someone executed cap deploy instead of cap deploy:migrations. I know this is human error, but I prefer a tool that doesn’t let people make mistakes.

One thing I needed from a deployment tool is that it work both remotely and locally. I worked on a project delivered to a foreign client, and the people responsible for updating the application on each release could only do it by connecting to a server and running a script. All of them used Windows and were used to working with Java systems that required them to access a machine, replace a WAR file and execute some commands. At the time, the solution was to create a deploy.sh file in the root folder of the application, which somebody from the client executed every time they needed to update it.

The problem mentioned above doesn’t happen with Inploy, as it can run both remotely and locally. When running remotely, the only thing it does is connect to a list of servers and run the tasks as if it were logged in locally. This is why Inploy is delivered as a plugin: so it’s available in every environment.

As of today, Inploy will only work for you if you’re using Git and Passenger.

To install Inploy you can execute:

script/plugin install git://github.com/dcrec1/inploy.git

On installation, Inploy writes a config/deploy.rb file. This file is read by the plugin and should look something like this:

deploy.application = "inploy"
deploy.repository = 'git://github.com/dcrec1/inploy.git'
deploy.user = 'dcrec1'
deploy.hosts = ['hooters', 'geni']
deploy.path = '/var/local/apps'

After that, we’re ready to execute the plugin’s tasks:

rake inploy:local:setup
rake inploy:local:update
rake inploy:remote:setup
rake inploy:remote:update

I recorded a short video demonstrating how to use the plugin’s different tasks. In the video I’m inside the Signal project, which already has Inploy installed, and I remove it along with the configuration file to show how easy it is to install and configure. After that I execute a remote setup and then a remote update. You can see that the commands Inploy runs are logged with their corresponding output, and that config/*.sample files are renamed to config/*, another Inploy feature. In the second part of the video a local update is run; then I delete the repository, clone it again and execute another setup, this time locally.

Deploy easily to Locaweb with Inploy from Diego Carrion on Vimeo.

More details about the plugin can be found in the official Inploy repository, where I’ll try to keep the README up to date. I hope the way Inploy works is clear, but just in case, looking at the code is a good way to eliminate doubts. The code is quite small (more or less 50 LoC) and self-explanatory. Here’s a snippet:

def local_setup
  copy_sample_files
  create_folders 'tmp/pids', 'db'
  run "./init.sh" if File.exists?("init.sh")
  after_update_code
end

def remote_update
  # a remote update just runs the local update on each configured server
  remote_run "cd #{application_path} && rake inploy:local:update"
end

def local_update
  run "git pull origin master"
  after_update_code
end

def after_update_code
  install_gems
  migrate_database
  run "rm -R -f public/cache"
  rake_if_included "more:parse"
  rake_if_included "asset:packager:build_all"
  run "touch tmp/restart.txt" # tells Passenger to restart the application
end

I hope this simple code motivates people to contribute to the project and make it smarter.

If this plugin helps you, please consider recommending me at Working With Rails.

Correcting a Git corruption problem

Sometimes, when we work with submodules the wrong way, we end up corrupting the repository, and when we try to pull from the server we get:

error: git-upload-pack: git-pack-objects died with error.
fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
remote: Generating pack...
remote: Done counting 39 objects.
remote: Result has 28 objects.
remote: error: unable to find b490fa1a6d81e39d3a8d99f9cce8b57c3397b7d7
remote: fatal: unable toremote: aborting due to possible repository corruption on the remote side.
fatal: protocol error: bad pack header

I’m not sure this is the best solution, but pulling from the public URL and then pushing to the server solves the problem.
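In commands, assuming origin points at the corrupted server and the git:// address is the repository’s public URL, that would be something like:

git pull git://github.com/user/project.git master
git push origin master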

Remarkable plugins for ActsAsTaggableOn, Paperclip and ThinkingSphinx

I created these three Remarkable plugins, for ActsAsTaggableOn, Paperclip and ThinkingSphinx, to help me in a project I’m working on.

Basically they let you write code like this:

describe User do
  should_act_as_taggable_on :categories
 
  should_have_attached_file :logo
 
  should_index 'addresses.city', :as => :address
  should_have_index_attribute :priority
end

If these plugins help you, please consider recommending me at Working With Rails.

Cool tests in the Java Maven platform now possible with rspec-maven-plugin

I don’t like Maven at all, but sometimes, depending on the project and the people, you need to use it. After playing with RSpec I stopped considering any Java testing framework acceptable, so for the project I’m working on right now I needed a solution that could integrate RSpec with Maven.

After googling I found this post from Bob McWhirter, where he presented an official rspec-maven-plugin. I tried to use it, but it wasn’t working, so I forked it, fixed it and changed some things.

The forked version of rspec-maven-plugin is on GitHub, and to use it you should:

install the plugin in your local repository (this is only necessary once):

git clone git://github.com/dcrec1/rspec-maven-plugin.git
cd rspec-maven-plugin
mvn install

configure your project’s pom.xml by adding this:

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>rspec-maven-plugin</artifactId>
    <executions>
        <execution>
            <id>test</id>
            <phase>test</phase>
            <goals>
                <goal>spec</goal>
            </goals>
        </execution>
    </executions>
</plugin>

set the JRUBY_HOME environment variable, for example:

export JRUBY_HOME=/opt/jruby-1.1.6/

The plugin will run in the test phase of the Maven build cycle.
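So, with the pom.xml configured as above, the usual Maven command is enough to run your specs:

mvn test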

Happy testing!

Common attributes translations with activerecord_i18n_defaults

The activerecord_i18n_defaults plugin lets you write locale files in a Rails 2.2 application like this:

pt-BR:
  activerecord:
    attributes:
      _all:
        login: "Identificação"
        name: "Nome"
      admin:
        age: "Idade"
      customer:
        nickname: "Apelido"
      user:
        email: "Endereço de e-mail"

instead of like this:

pt-BR:
  activerecord:
    attributes:
      admin:
        login: "Identificação"
        name: "Nome"
        age: "Idade"
      customer:
        login: "Identificação"
        name: "Nome"
        nickname: "Apelido"
      user:
        login: "Identificação"
        name: "Nome"
        email: "Endereço de e-mail"

Next time you need to create a locale file, put your common attributes under the _all key and stay DRY.
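To see the effect, with the locale above loaded the shared keys resolve for every model; here is a quick check using Rails’ human_attribute_name:

User.human_attribute_name("login")   # => "Identificação"
Admin.human_attribute_name("age")    # => "Idade"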

If you like this plugin, please consider recommending me at Working With Rails.

Is “one assertion per test” a “false idol”?

A few days ago I wrote an article on my Portuguese blog trying to demonstrate why the “one assertion per test” pattern is an excellent one. If you don’t know much about the pattern, I suggest following this link to a post by Jay Fields about it.

Some people gave me feedback in the comments, and one of them pointed me to this post from the guys at thoughtbot where, among other practices, they criticized the one I was trying to evangelize :)

I like arguments, and in this case arguments are present, but I think they aren’t valid; we’ll see why.

6 database records created

The author of the post shows some Shoulda code where a record is created in the database before each test, and claims that the pattern is bad because following it meant creating six database records instead of just one. Now I ask: did anyone tell anybody to create a record before each test? In my first post about the pattern I wrote an example containing the following code:

before :all do
  @email = Email.new("morena@opensource.com")
end

Note that I wrote before :all and not before :each; that’s because, independent of the pattern, tests need to be smart. If you can’t give Shoulda a block to be executed before all tests, that’s a problem with the framework, not with the pattern.

TDD should feel rhythmic

In the same code discussed before, the author wrote some tests consisting of one line (Shoulda macros) and others consisting of three lines. He then argues that the pattern under discussion is bad because the code has no rhythm, and TDD should have rhythm. Does this make any sense? The pattern says nothing about tests being one line or three, just about having one assertion per test. It has no relation to the pattern; I could get the same behavior without following it, like this:

should_respond_with_xml_for 'user' 
 
should 'include the user id and user name' do 
  assert_select 'id', @user.id.to_s 
  assert_select 'name', @user.name 
end

What about now? Can we say Shoulda is bad because I wrote code that I consider to have no rhythm? Are Shoulda macros not that good because they take away the code’s rhythm?

Test names are not intention-revealing

I’m not sure I understood this argument very well, but I think the author claimed that the more tests you have, the more specific their names have to be. Assuming my understanding is right, the intent of the pattern is precisely this: to create tests that specify what they are testing and what the code should do. When you run your tests, I certainly think it’s better to receive this:

a GET to /users/:id.xml should respond with success
a GET to /users/:id.xml should render template show
a GET to /users/:id.xml should find the correct User
a GET to /users/:id.xml should respond with xml for user
a GET to /users/:id.xml should include the user id
a GET to /users/:id.xml should include the user name

than

user show xml

I wrote “user show xml” because the second code sample written by the author is a single test named “test_user_show_xml”. “user show xml” doesn’t tell us anything about the system; if I read this test I would think the user must show an XML. The first case tells us much more about the system: it’s an executable specification.

If context/describe/should/it blocks want to make things shorter then why aren’t we using a library (Test::Unit) that makes them one line?

What relation does this have to the pattern? Again, the pattern only talks about having one assertion per test, not about making the code smaller. Moreover, I think this argument favors the pattern: there are people who say they don’t follow it because they prefer to write less code. In that case, if you want to write less code then you should use Test::Unit :P

The author then shows the Test::Unit code I talked about before, with a lot of asserts in a single test. I think that test is really bad because it doesn’t specify anything at all; to understand it you need to read the code, not just the title.

Finally, the post claimed that the pattern is a solution to a problem that doesn’t exist, but the problem really does exist, and it’s this:

Writing non-executable documentation for code is very expensive, because every change in the code’s behavior also requires a change in the documentation, which is double work. Besides the double work, non-executable documentation does not guarantee that the code does what it should. The solution is to create executable documentation, and executable documentation means tests that specify; the easiest way to create tests that specify is to follow the “one assertion per test” pattern.
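As a rough sketch of what such an executable specification looks like in RSpec (the names here are illustrative, not taken from the posts being discussed):

describe UsersController, "a GET to /users/:id.xml" do
  before :all do
    @user = User.create!(:name => "Morena")
  end

  before :each do
    get :show, :id => @user.id, :format => 'xml'
  end

  it "should respond with success" do
    response.should be_success
  end

  it "should include the user name" do
    response.body.should include(@user.name)
  end
end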

The GemHub way to create RubyGems

What is GemHub?

GemHub is a gem that creates a simple directory structure for building a gem that is going to be tested with RSpec and hosted on GitHub.

Why not use NewGem or Sow?

If you’re writing a gem and planning to host it on GitHub, then NewGem may not be the best option, because it was created with RubyForge in mind and generates a lot of stuff you don’t really need with GitHub: history, post-installation and manifest files, a tasks folder, a doc folder for documentation and some script generators.

With GitHub in mind, you don’t need a history file because you get Git’s commit history with a click. Also, you can write all kinds of instructions in a Readme.textile, and that information will appear on the repository’s site. If you need more detailed documentation, you can write it in the wiki. Finally, when writing a not-so-complex gem, you will most probably not need any tasks other than GemHub’s defaults.

Sow, a much simpler option, also generates history and manifest files. It creates a Readme file too, but with a txt extension, which means no cool formatting on GitHub. Sow also creates a bin folder, intended for scripts that will be executed in a terminal, which most gems don’t need.

Finally, both NewGem and Sow create tests using Test::Unit. Writing tests with lots of asserts is very ugly compared to RSpec’s cool expectations, so GemHub uses RSpec as its testing framework.

Working with GemHub

When executed:

gemhub gem_name

GemHub will create a spec/gem_name_spec.rb file to specify your gem’s behavior, a lib/gem_name.rb to implement that specification, a README.textile file (with an MIT license included) where you can write whatever you want, and a Rakefile with common tasks. The Rakefile also holds your gem’s details (name, version, platform, etc).

      create  
      create  lib
      create  spec
      create  lib/gem_name.rb
      create  spec/gem_name_spec.rb
      create  README.textile
      create  Rakefile

Before creating your gemspec file, you should open the Rakefile and edit the gem details. With the Rakefile edited, you can run rake make_spec to generate your gem’s gemspec file, rake gem to build the gem from that gemspec, and rake install to install the built gem locally. It’s good practice, before pushing your code to GitHub, to also run rake specs to verify that your code behaves as expected and rake gem to verify that your gemspec is OK, so GitHub can build your gem without problems.
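In short, the cycle looks like this:

rake make_spec   # regenerate the gemspec from the details in the Rakefile
rake specs       # run the RSpec suite
rake gem         # build the gem from the gemspec
rake install     # install the built gem locally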

Rocking your application with Rails 2.2 i18n support

I know Sven Fuchs wrote about the Rails i18n core API back in July (when Rails 2.2 was not yet released), but since then the i18n API has changed a little, and now that Rails 2.2 is out I’ll describe the steps you need to rock your application with this excellent new support.

1. Configure the I18n module

You need to select the translation files the module will load, and the default locale for when the user doesn’t specify one. As you only need to do this once, you can put the following code in config/environment.rb:

I18n.default_locale = "pt-BR"
I18n.load_path += Dir.glob("config/locales/*.yml")

Update: we should append to load_path instead of assigning a new array, so we don’t overwrite the paths Rails 2.2 already provides.

2. Set the locale in each request

You need to tell the I18n module which locale it should use for the current request. This can be done in a filter, before the controller executes the action. A good place for this filter is the application controller:

before_filter :set_locale
 
def set_locale
  I18n.locale = params[:locale] || I18n.default_locale
end

3. Internationalize

From now on, the only thing you need to do is translate and localize: from anywhere in your application you can call the translate and localize methods (t and l are aliases), for example:

I18n.t 'store.title'
I18n.l Time.now
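Both methods accept options; for instance, the same translation can be fetched with an explicit scope, and localize accepts a format (assuming the corresponding format translations exist in your locale files):

I18n.t :title, :scope => :store   # same lookup as 'store.title'
I18n.l Time.now, :format => :short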

Now that we know how to get our applications internationalized, I’ll talk about the process.

How does the Rails 2.2 i18n work?

The I18n module stores a backend that implements the localize and translate methods. Every time we call I18n.translate or I18n.localize, the I18n module delegates these calls to the configured backend. Rails 2.2 comes with a default backend called Simple Backend.

The first time the Simple Backend internationalizes something, it loads the internationalization files defined in I18n.load_path.

The internationalization files

Internationalization files can be Ruby hashes or YAML documents:

Ruby Hash

{
  :'pt-BR' => {
    :foo => {
      :bar => "baz"
    }
  }
}

YAML

pt-BR:
  foo:
    bar: baz
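With either file loaded, the lookup works the same way; a quick sanity check:

I18n.locale = :'pt-BR'
I18n.t 'foo.bar'   # => "baz"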

Reloading internationalization files

If you are using the Simple Backend and for some reason want to load or reload an internationalization file after the first internationalization has been done, you can call the load_translations(*file_names) method:

I18n.backend.load_translations("config/locales/es-PE.yml", "config/locales/en-US.yml")

Update: this is considered a hack, because no backend method should be called directly; it should only be used for testing purposes, when working with script/console for example.

Update 2: since this post was written, a new method that reloads the translations has been added to the I18n module:

I18n.reload!

This method is not considered a hack! :)

More information

If you want to know about interpolation, pluralization, default values and scopes, you should read the corresponding sections in Sven Fuchs’s post. I’m not covering these features here because nothing has changed and because Sven Fuchs did a great job of explaining them in a simple way.