Tuesday, March 30, 2010

Git with SVN by Praveen MV

Introduction
For the uninitiated, Git was invented by Linus Torvalds to support the development of the Linux kernel, but it has since proven valuable to a wide range of projects. There are two Git packages for Windows: the Cygwin-based Git and msysGit.

Git concepts

Repository: An archive of what the working tree looked like at different times in the past — whether on your machine or someone else’s. It’s a history of snapshots, and allows you to compare your working tree to those past states.

The index: Git does not commit changes directly from the working tree into the repository. Instead, changes are first registered in something called the index. Think of it as a way of “confirming” your changes, one by one, before doing a commit (which records all your approved changes at once).

Working tree: Any directory on your file system which has a repository associated with it (typically indicated by the presence of a sub-directory within it named .git). It includes all the files and sub-directories in that directory.

Branch: Just a name for a commit, also called a reference. It’s the parentage of a commit which defines its history, and thus the typical notion of a “branch of development”.

Master: The mainline of development in most repositories is done on a branch called “master”. Although this is a typical default, it is in no way special.

HEAD: HEAD refers to the most recent commit of the last branch you checked out. Thus, if you are currently on the master branch, the words master and HEAD would be equivalent.

Getting started

Let's start by seeing how we can check out an existing repository, make modifications, and check in. Checking out a repository is as simple as:

git svn clone <svn-repository-url>
This might take a long time depending on how old the codebase is, since it fetches everything that ever happened on the repository: every revision, all the logs, etc.
However, the simplest and fastest way to check out an existing repository is to copy the .git folder from anyone who already has the code checked out. Using Git Bash, navigate to the folder containing the .git folder and execute the following command:

git checkout .
Now our local repository is ready.
Next, as developers, we modify the code and add new features; then we need to commit. Use the following command to view the modifications made to the source code; it lists all the new and modified files:

git status
Now we need to add all the new and modified files to the list of files to be committed (the index). Use the following:

git add .
We can also add one file at a time using

git add filename
Now we must commit the added files to our local repository:

git commit -m "Message while committing"
So far we have committed only to our local repository. Now we must take updates from the main repository to see if there are any modifications or conflicts:

git svn rebase
This fetches the latest revisions from the repository and replays your local commits on top of them, assuming there are no conflicts. Now we can push our commits to the central repository, so we run:

git svn dcommit
We are done with our first commit. That covers the basic flow of Git.

Sunday, January 31, 2010

Do high-performing teams always need retrospectives?

by Sumeet Moghe

A few months back Patrick Kua wrote a blog post questioning whether the lack of retrospectives is really a smell. Pat mentioned how he was very lucky that his team had some really strong people who got things done and kept the project 'continuously improving'. To quote Patrick: “It’s amazing what a bunch of energized, passionate people with the ‘solve the right problem once’ attitude can achieve.”

Reflecting on this post from Pat, I had a few thoughts. Assuming we agree that:
• retrospectives are a 'best practice';
• and that they are a tool for improvement;

is it fair for us to say (at least theoretically) that we can take this 'best practice' to an extreme, just as the rationale behind extreme programming may suggest?
• Agile methods (at least theoretically) assume a team composed of the 'best people' who are 'generalizing specialists' or 'versatilists'.
• The key to having a really strong team could then be mechanisms that not only encourage strong communication, but also allow teams to recognize problems and find solutions. That will then fuel continuous improvement, perhaps making retrospectives a purely optional ritual.

What could these mechanisms be?
• My passion for retrospectives set aside, I realize that the practice is definitely more than a decade old in the mainstream. Things have changed significantly since then.
• More teams understand the value of solving problems 'just in time';
• Command and control leadership may have not disappeared from the horizon, but leaders are slowly discovering their roles in empowering their teams to take more control of situations and problems.
• Technology is changing fast, and our ability to use tools to make problems visible and to solve them is fast increasing.
• Here are a few ideas I had to increase communication and to recognize and solve problems on a team. These ideas don't necessarily negate the need for a retrospective, but they can perhaps take us one step further toward being high-performing teams.

A low tech method - daily 'hot topics'

A few months back, we were a team of 7 people with Ritin Tandon at the helm as the team lead. Ritin devised a method for us to recognize issues and solve them on an ongoing basis. In the team area, he put up a flip chart called "Hot Topics". Every time anyone in the team had something to discuss or a non-urgent problem to solve, they'd put a sticky on the flip chart. At the end of the day, one of us (often Ritin) would facilitate a quick discussion around our hot topics, and we'd volunteer to solve the problems then and there. If we expected that a problem would take time to solve, one or more of us would sign up to work on it and report progress back to the team. It was a fantastic practice: for the investment of a few minutes each day, we got a huge sense of fulfillment from taking blockers out of our way. What we were doing was a mini-retrospective each day, and that helped us be a continuously improving team.

A hi-tech method - use Web 2.0 tools to surface and resolve problems.

There are quite a few tools these days that can help create high-quality communication in teams. Two tools that I think can be really useful for surfacing and resolving problems in a team are Google Wave and Google Moderator. And of course there's our home-grown www.ideaboardz.com.

Google Wave
Google Wave follows the paradigm of blips. It would be quite easy to create a retrospective playground on Google Wave: you create brainstorming blips (Keep Doing, Stop Doing, etc.) on the wave, and people can add their thoughts and follow the discussion under those blips. In fact, I think this could be even better than a ritual retrospective, where we often don't discuss issues because of a lack of time. Using this method, people can choose to comment on every issue they feel passionate about instead of restraining themselves only because others don't see the value in their thoughts yet!

Google Moderator
Google Moderator is a great social application for crowdsourcing ideas. You could potentially ask your team an open-ended question about ideas for improving the project. As the team posts its ideas, members can vote up the ones they like most and comment on their implementation. Over time, you have a nicely prioritized list of improvement activities for your project. As you implement these ideas, the burning need for a retrospective may disappear.

Sunday, December 13, 2009

Concurrency, Time and Clojure

by Suresh Harikrishnan

Download this one-page Geek snack episode, and place it at your snack area.

Have you ever wondered what concurrency and time related constructs your favorite programming language provides?

I did, after watching this wonderful presentation by Rich Hickey:
http://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey

Java supports concurrency using its threads library. There are a few constructs supporting multi-threading, mostly to ensure you play safe when using threads; these constructs exist to protect the mutable state in objects. And Java's notion of time is limited to a few (rather awful) classes in its library. Rich Hickey uses the example of athletes running a race to show the problem with concurrency in most languages in use today: if you want to know who is leading a race, you don't ask the athletes to stop running. The athletes keep running as you note the current standings. Simply put, Java and other popular languages presume a single shared timeline.

Compare this with Clojure. Clojure's approach to concurrency is built on two distinct features: a functional approach to its data structures and a range of concurrency constructs. Its pure functions are time-independent; in other words, free of side effects. Clojure's data structures are immutable but persistent. It differentiates "value" from the notion of identity. Most OO languages treat them as one: an Account object could have no balance, and then when you deposit something into it, it takes on a positive balance. Clojure, on the other hand, differentiates the account identity from the actual balance in that account.

Only the references are mutable, and Clojure supports 3 types of mutable references:
• Synchronous coordinated changes between threads using refs.
◦ Clojure uses STM (software transactional memory) for ref modifications.
◦ You need to be inside a transaction context to change refs.
• Asynchronous coordinated changes between threads using agents.
◦ Similar to actor models.
◦ You pass a function to change the state of the agent.
• Isolated changes within a thread using vars.
◦ Equivalent to thread locals.
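The three reference types above can be sketched in a few lines. This is a minimal illustration (using the later Clojure 1.3+ `^:dynamic` syntax for vars; the names account, counter, and *context* are made up for the example):

```clojure
;; A ref: synchronous, coordinated change, done inside an STM transaction
(def account (ref 0))
(dosync (alter account + 100))  ; deposit 100; alter must run inside dosync
@account                        ; => 100

;; An agent: asynchronous change, made by sending it a function
(def counter (agent 0))
(send counter inc)              ; inc is applied to the agent's state later
(await counter)                 ; wait for the sent action to complete
@counter                        ; => 1

;; A var: thread-local rebinding, isolated from other threads
(def ^:dynamic *context* :default)
(binding [*context* :per-thread]
  *context*)                    ; => :per-thread, only on this thread
```

Note how the caller never locks anything: the coordination strategy lives in the reference type chosen, not in the calling code.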

Check out more about Clojure at http://clojure.org. There is a lot more to say about Clojure; maybe later :)

Wednesday, November 18, 2009

Java – Bridges in Generics

by Venkatesh R S


In the non-generics Java world (JDK 1.4 or before), we may have noticed that all the wrapper classes implementing the Comparable interface have two compareTo methods, as shown below:

public interface Comparable {
    public int compareTo(Object obj);
}

public final class Long extends Number implements Comparable {
    // the "bridge" method: casts and delegates
    public int compareTo(Object obj) {
        return compareTo((Long) obj);
    }

    // the convenience method
    public int compareTo(Long anotherLong) {
        // logic for comparing two Long objects
        return result;
    }
}


One is a convenience method taking an argument of type Long for comparison.

The other is implemented as a result of implementing the Comparable interface and takes an argument of type Object. This method internally casts the incoming object to the given class type (Long) and delegates the call to the convenience compareTo method shown above. If the cast fails, a ClassCastException is thrown. We call this the 'bridge' method.

Post Java 5, with the introduction of generics and type safety, things have improved: we no longer need to write the bridge method compareTo(Object o) ourselves, and we don't have to worry about a ClassCastException anymore. The implementation of the wrapper class Long in Java 5 or above looks as follows:


public final class Long extends Number implements Comparable<Long> {
    @Override
    public int compareTo(Long anotherLong) {
        // logic for comparing two Long objects
        return result;
    }
}


But hold on a second. Don't Java 5 and later compilers perform something called type erasure, a process in which the compiler removes all information related to type parameters and type arguments within a class or method, for the sake of binary compatibility with Java libraries/applications created before generics?


Doesn't that mean the above Java 5 Long code, after compilation, should be translated to look just as it did in Java 1.4?


If that's the case, where does the bridge method that maintains the contract between Long and the Comparable interface come from?


Things are supposed to break here. But they actually don't. Why?

That's where bridges in generics come into the picture. When the compiler translates the code for binary compatibility with older applications, it automatically adds the required bridge methods in order to preserve the implementation contracts. In this case the contract is between Comparable and the class (Long) that implements it.

The following snippet of reflection code on Long.class should reveal the secret:

final Method[] methods = Long.class.getDeclaredMethods();
for (Method method : methods) {
    System.out.println(method.toString() + " - IsBridge?: " + method.isBridge());
}


Output: .....

public int java.lang.Long.compareTo(java.lang.Long) - IsBridge?: false
public int java.lang.Long.compareTo(java.lang.Object) - IsBridge?: true



Tuesday, September 8, 2009

Does it Really Work?

by Francisco Trindade




Some time ago, during a TW London Thursday event, I had the pleasure of seeing the Agile Methods and User Centered Design presentation by Dave Robertson and John Johnston (or at least part of it), about how Agile and User Centered Design are more of a match, sharing goals and values, than different approaches to software development.


If you have some time, you should really watch it; it's worth it.


The overall presentation is really good, but the reason I'm posting here is one specific point that was mentioned, which I believe really hits the spot: they say we should rethink the word 'work' in the sentence "the simplest thing that could possibly work".


This point goes back to the Agile vs. Usability discussion, and it is very correct IMO, because it reiterates that development teams should not deliver just any code because it was quick to develop and the client is happy (although he shouldn't be at all) since it didn't cost a fortune.


And what is interesting about this subject is how agile teams don't usually accept low code-quality standards (code without tests, lots of hacks, etc.), but easily accept low usability standards, not understanding that it is also their responsibility to define what a good user experience is.


What I'm NOT trying to say is that the user should be left out of the application design. He should definitely have his opinion (and a strong one), but he should also receive advice on UX standards as much as he does on code quality, making sure that he understands what he loses when trying to save money on each particular feature.


(read more at Francisco's blog).




Tuesday, August 4, 2009

Measuring Value of Automation Tests

by Preetam Reddy

The value of test automation is often described in terms of the cost benefits from the reduction in manual testing effort (and the resources needed for it), as well as its ability to give fast feedback. However, this rests on a key assumption: that the automated tests are serving their primary purpose – to repeatedly, consistently, and quickly validate that the application is within the threshold of acceptable defects.


Since it is impossible to know most of the defects in an application without using it over a period of time (either by a manual testing team or by users in production), we will need statistical concepts and models to help us design and confirm that the automated tests are indeed serving their primary purpose... (read more at Preetam’s blog).



Download this Geek Snack episode here.


Wednesday, July 15, 2009

Velocity gone wrong #1: Done is not done

by Danilo Sato

Dan North wrote an interesting post about the perils of estimation, questioning our approach to inceptions, release planning, and setting expectations about scope. This made me think about the implications of those factors once a project starts, and I came up with some anti-patterns on the usage of velocity to track progress. This is my first attempt at writing about them.

Before we start, it's important to understand what velocity means. My simple definition of velocity is the total number of estimation units for the items delivered in an iteration. The estimation unit can be whatever the team chooses: ideal days, hours, pomodoros, or story points. The nature of the items may vary as well: features, use cases, and user stories are common choices. An iteration is a fixed amount of time in which the team works on delivering those items. Sounds simple? Well… there's one concept that is commonly overlooked, and that's the source of the first anti-pattern: what does delivered mean?

… (read more at Danilo’s blog).

