Wednesday, December 3, 2014

Things are getting spocky

As another year passes, I find myself privileged to say that I've been working in "agile" (we actually called it XP or eXtreme Programming, back then) since the dawn of the 21st century.  It'll be about 15 years as the new year dawns.  I was a part of an upstart team of fun and quirky dudes at a small startup, flush with capital.  If you can believe it, our angle was mobile development.  We got to work with Motorola's first GPS phone prototypes, and their Java MIDP-compliant VMs.  It was equal parts exciting and frustrating. 

Along with pair programming, a cutting-edge IDE called "IntelliJ" and a continuous integration server, we adhered strictly to test-driven development, using JUnit.  This was a fairly kooky practice at the time, and in an era predating things like annotations, unit test classes had to extend JUnit's TestCase base class.  

As the years continued to roll by, I stood by the practice of TDD and rejoiced at the introduction of new libraries and techniques.  Mocking out dependencies was a revelation, and I thought EasyMock was the bee's knees.  Then Mockito came along and blew the lid off of everything. 

For a good few years, I felt I'd plateaued -- writing what I thought were fantastic tests, laden with @Mock annotations and builder pattern-style callouts for the behavior I wanted to provide.  Not to mention the anonymous inner classes for flexible "matchers."  I stood dutifully by my practice of hammering out long and elaborate tests, many of which ended up insanely difficult to read.  "At least I have regression protection," I would tell myself, and sneer at my fellows who eschewed testing. 

It wasn't until a couple of years ago that a new technology came around and turned my understanding of testing completely upside down.  This coincided with my first full-on brush with the Groovy language. 

This revelation was Spock.  Of all of the technologies, libraries, techniques and platforms I've come across in the last several years, I'd call Spock the best and most interesting one.  Spock is perfectly representative of what makes Groovy a great language.  It's also learned from the things that made testing in a world of dependency injection so painful.  

Rather than using something like JUnit and picking some sort of a mocking library, you get all of this with Spock.  It has its own powerful mocking concept, taking full advantage of Groovy's proxying capabilities.  It also makes things like mocking out static methods a snap.  
The features and functionalities of Spock aren't its greatest virtues, however.  What makes Spock great is how expressive it is.  In case you're not familiar, this is what a Spock test looks like:

def "Calls the payment service and returns the result"() {
  setup:
  PaymentController controller = new PaymentController()
  controller.paymentService = Mock(PaymentService)

  when:
  PaymentResult result = controller.pay("MyAccount", 55.50)

  then:
  1 * controller.paymentService.acceptPayment("MyAccount", 55.50) >> new PaymentResult(accepted: true)
  result.accepted
}

What you're seeing here is something that reads like prose.  And the beauty of it is that stubbing and verification are captured in a single line.  Using the rather old-school concept of labels, Spock is able to skip from block to block in the method.  No assertion statements are required in the verification block -- every boolean expression in the then: block is implicitly asserted. 

Do you remember how this looked with JUnit? 

You would have to spell out the behavior before your method call with a chained statement, looking something like:

when(paymentService.acceptPayment(eq("MyAccount"), eq(55.50))).thenReturn(new PaymentResult());

Further down the line, you would have to verify the behavior:

verify(paymentService).acceptPayment( ... ); 

And you'd have to call out the matching parameters once again.  If you're disciplined enough, you can keep your JUnit tests pretty tight.  But frankly, when I've had to work with legacy code in my post-Spock career, I've found myself trying to cram everything into the BDD-style given/when/then flow.  

What surprises me about Spock is that it hasn't taken off at an even greater speed.  I overheard some coworkers from a different project talking about Spock today, and both of them were clearly unfamiliar with it.  I'll admit that I was hesitant to embrace Spock at first, but even then I recognized how readable and succinct the tests were.

Even if you're saddled with writing Java as your production code, I highly recommend tapping into Spock as your testing platform.  There are a few gotchas (Mock/GroovyMock/Spy/GroovySpy) when you're strictly using Java, but it's well worth it.  
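
To give a taste of the static-method mocking mentioned earlier, here's a minimal sketch.  PaymentGateway and its process method are hypothetical, and the big caveat applies: a global GroovyMock only takes effect for Groovy classes called from Groovy code, which is precisely the gotcha you'll hit with Java production code.

def "mocks a static method with a global GroovyMock"() {
  setup:
  GroovyMock(PaymentGateway, global: true) // replaces all usages of the class for this test

  when:
  PaymentResult result = new PaymentController().payViaGateway("MyAccount", 55.50)

  then:
  1 * PaymentGateway.process("MyAccount", 55.50) >> new PaymentResult(accepted: true)
  result.accepted
}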

Remember, a good test will generally result in good, tight code. 

Thursday, November 20, 2014

Going the other way

As my career in software has trundled along, it's become increasingly important to me to try to give back to the community that has given me so much over the years.  From a sheer numbers perspective, most developers simply enjoy the largesse that the "givers" put forth.  Granted, many of these contributors are paid to put these things out there as part of their day jobs, but the giving and sharing nature of the software community is one of the more remarkable -- and less discussed -- facets of the profession.  

My resolution for the last couple of years has been to share any code I write that could possibly be of use (and, of course, isn't proprietary) with the community.  The hope is that one of those tidbits can help somebody solve something, and maybe spark something that catches on more widely.  

As a part of this, I've been doing a bit of (for lack of a better word) "backsliding" to writing Java code.  As an ardent supporter of the Groovy language (and most other JVM languages), I rarely use the "mother language" to complete my day-to-day tasks.  But one thing I've realized as I've tried to get involved with the JVM-based community at large is that not everybody has this luxury.  Or indeed this opinion.  As much as I'd love to write all of my library code in Groovy, I realize that you're asking a little something of your users if you force them to include Groovy and its associated libraries as one of their dependencies.  Taking the long way around with Java results in a much cleaner set of dependencies for the end user.  

A library I offered recently was a Java implementation of the wildly complicated "National Retail Federation 4-5-4 Calendar."  When I wrote it initially, it took advantage of Groovy's date shortcuts, with things like: 

Date aWeekLater = date + 7 // Groovy overloads plus on Date to add days

There were a few bumps in the road with Map interactions, but the conversion over to Java was pretty painless.  To be honest, it was almost comforting in a way, like reminiscing about the old days with a friend.  I used to dutifully rattle off the long, repetitive lines (with the requisite semicolon at the end) and have that intuition of where I had to stick that cast statement to make things compile.  

The other plus is that you can use Spock to test the code, since it's all going to be bytecode anyway.  The library user can simply ignore the unit tests. 
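
For instance, a quick data-driven spec might look like the following.  NrfCalendar and weekCount are hypothetical stand-ins here, not the library's actual API:

import spock.lang.Specification

class NrfCalendarSpec extends Specification {

  def "every 4-5-4 fiscal year has 52 or 53 weeks"() {
    expect:
    new NrfCalendar(year).weekCount() in [52, 53] // hypothetical API, for illustration

    where:
    year << [2013, 2014, 2015]
  }
}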

I had a small sense of cockiness as I proceeded to my next conversion task (all of my code starts out as Groovy).  As part of a reporting suite I built, I put together a Java wrapper for Adobe's absolutely atrocious SiteCatalyst API.  I did my best to unfurl the insanely complex rat's nest of JSON that they return.  It was easy enough with Groovy -- JsonSlurper and JsonBuilder could marshal and unmarshal with the greatest of ease.  I decided it was time to unleash it on the world, since I no longer had use for it and had invested a pretty large amount of time figuring out how to get meaningful data out of Adobe's analytics vault.  
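
If you haven't seen them, here's the flavor of it, with a made-up payload:

import groovy.json.JsonBuilder
import groovy.json.JsonSlurper

// Unmarshaling: nested JSON becomes plain maps and lists
def response = new JsonSlurper().parseText('{"report": {"metrics": [{"id": "pageviews"}]}}')
assert response.report.metrics[0].id == 'pageviews'

// Marshaling: a map goes back out as JSON in one line
String json = new JsonBuilder([reportID: 42, status: 'done']).toString()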

This is where I started to appreciate the strength of Groovy.  With the single groovy-all dependency, you get everything: fantastic JSON support, and all of the wonderful shorthand for map access.  The work on this conversion is still in progress, but here's an example of where you can really save yourself with Groovy: 

    def requestStructure = [reportDescription: [
            reportSuiteID  : omnitureReport.reportSuiteID,
            dateFrom       : omnitureReport.dateFrom.format(OMNITURE_REPORT_DATE_FORMAT),
            dateTo         : omnitureReport.dateTo.format(OMNITURE_REPORT_DATE_FORMAT),
            metrics        : omnitureReport.metrics.
                    collectAll {[id: it.id, segments: it.segmentId ? [[id: it.segmentId]] : null]},
            sortBy         : omnitureReport.sortBy,
            dateGranularity: omnitureReport.granularity?.value(),
            elements       : omnitureReport.elements?.collectAll {
              [id            : it.id,
               top           : it.limit == -1 ? omnitureReport.limit : it.limit,
               search        : getSearchKeywords(it.elementTypeAndKeywordFilter),
               classification: it.classification ?: "",
               startingWith  : it.startingWith,
              ]
            },
            segments       : omnitureReport.segmentIds ? omnitureReport.segmentIds.collect{[id:it]} : null
    ]]
Nothing complicated, right?  In one statement, I'm building up the structure of a report with a handful of succinct expressions.  Let's just take a peek at how we'd do this in Java:

DateFormat dateFormat = new SimpleDateFormat(OMNITURE_REPORT_DATE_FORMAT);
Map<String, Map<String, Object>> requestStructure = new HashMap<>();
Map<String, Object> reportDescription = new HashMap<>();
requestStructure.put("reportDescription", reportDescription);
reportDescription.put("reportSuiteID", omnitureReport.getReportSuiteID());
reportDescription.put("dateFrom", dateFormat.format(omnitureReport.getDateFrom()));

This is all pretty doable, but where the wheels really come off is on some of the compound statements:

omnitureReport.metrics.collectAll { [id: it.id, segments: it.segmentId ? [[id: it.segmentId]] : null] }

How does this look in Java? I'd start by busting it out into a method: 

private List<Map<String, Object>> getMetrics(List<OmnitureRequestMetric> metricsIn) {
  List<Map<String, Object>> result = new ArrayList<>();
  for (OmnitureRequestMetric metric : metricsIn) {
    Map<String, Object> row = new HashMap<>();
    result.add(row);
    row.put("id", metric.getId());
    if (metric.getSegmentId() != null) {
      row.put("segments", Collections.singletonList(Collections.singletonMap("id", metric.getSegmentId())));
    } else {
      row.put("segments", null); // mirrors the null from the Groovy version
    }
  }
  return result;
}

And frankly, it only gets worse from here.  

Thankfully, there are some wonderful JSON libraries out there, namely Jackson, which makes marshaling and unmarshaling pretty simple.  Not as simple as in Groovy, I'd dare say, but at least palatable. 
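
For what it's worth, the round trip with Jackson's ObjectMapper looks roughly like this -- a sketch with a made-up payload, not the actual SiteCatalyst structure:

import com.fasterxml.jackson.databind.ObjectMapper;

import java.util.HashMap;
import java.util.Map;

public class JacksonRoundTrip {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();

    // Marshaling: a Map (or any bean) goes straight to JSON
    Map<String, Object> report = new HashMap<>();
    report.put("reportID", 42);
    String json = mapper.writeValueAsString(report);

    // Unmarshaling: back into a Map, or a typed bean if you've modeled one
    Map<?, ?> parsed = mapper.readValue(json, Map.class);
    System.out.println(parsed.get("reportID"));
  }
}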

I suppose I should go and keep working on this conversion task... 

Thursday, November 6, 2014

... And now for something completely different

Like so many others in the IT community, I can directly trace my path into IT back to gaming.  I was 5 years old when my parents sprang for an 8086 PC clone.  It sounded like a vacuum cleaner when it booted up.  The dulcet tones of the boot sequence are hardwired into my brain even today.  First, a RAM check, then the hard drive would spin up and give a gratifying "BEEP-BEEP."  I'd say that my old man was actually quite a trailblazer -- he had a primitive modem in this behemoth and dialed in to his college's e-mail system on occasion.  None of that mattered to me, however (not until a while later anyway).  What I loved the most about that system were the games.  Much to my delight, the computer came with a pretty extensive array of games: a healthy combination of bootable 5.25" floppies and games installed on the hard drive. 

I cut my teeth on Moon Bugs, Bouncing Babies, J-Bird, Bushido, BurgerTime and a bunch of other ones that aren't even worth mentioning.  As the years went on, I started to feel crippled by the constraints of our old metal-clad behemoth.  You see, the main purpose of the PC was word processing, both for my academic father and for my aspiring siblings and me in our school projects.  I couldn't cajole my parents into fitting it with the hardware necessary to interface with a joystick, and it was equipped with a CGA monitor: a 4-color affair that was quickly eclipsed by EGA (16 colors) and VGA soon thereafter.  I clearly recall looking at software boxes at Radio Shack, seeing "Joystick Required" or "EGA/VGA," and feeling my heart sink, knowing that I was missing out on yet another game.  A lot of my friends were getting newer machines, equipped with the ubiquitous joystick hardware, and some even with the mind-blowing SoundBlaster card.  With this hardware, games had soundtracks and effects that were nothing short of staggering.  My machine, with its lowly "PC speaker," severely limited the possibilities, and most game publishers didn't push the envelope much.  But occasionally, you'd see some brilliance on the part of the developers.  Two that come to mind are Where in Time is Carmen Sandiego? and the unlikely and addictive Street Fighting Man.

Like so many other families, my folks eventually allowed us to modernize, and we got a Pentium-powered Gateway machine and entered the "cyber age."  At this point, my interest in games really started to wane.  Development became much more interesting to me, and as first-person shooters became de rigueur, I bowed out from gaming essentially for good.  In later years, I would go to "guys' weekends" and be the only one that didn't want to get in on a networked game of Halo.  

My interest in games didn't really spark up again until the faux-musical games exploded in popularity.  I was a Guitar Hero phenom who could devastate nearly anybody.  And a few of the arcade-style games on the Wii would hold my attention.  My Wii, though, saw most of its action in "Virtual Console" mode, where I could play my old 8-bit favorites.  Blades of Steel was a frequent go-to, as was Punch Out. 

Mind you, this was YEARS before the iPhone.  I was shredding up the Guitar Hero with a wonderfully durable Nokia dumbphone in my pocket -- and I was happy about it.  When smartphones DID come around, I think it revealed to us what really worked with games.  

You see, I've noticed that a lot of iOS/Android games that have exploded in popularity of late have really leveraged that old 8-bit style.  Case in point -- Flappy Bird.  This brutally simplistic game came as close to plagiarizing NES graphics as possible: the pipes, the fishy-looking bird, the bright and simple sounds.  There's another game that I enjoyed playing for a while called The Firm.  It reminds me a lot of games I used to play on the old 8086.  

This got me to thinking.  What was the major turning point in gaming?  Where did it go wrong, at least from the perspective of a person like me?  I really pinpoint it to the era of the affordable home PC.  I look back fondly on the Sunday paper inserts from Best Buy and Circuit City, touting ridiculously cheap Packard Bell systems with 15" CRT monitors flanked by cheap speakers.  Even with Windows 3.1 as an operating system, game publishers started to outkick their coverage.  Games like Myst really heralded the end for simpleton gamers like me.  I liked things simple and easy to learn -- minimal, but insanely difficult to master.  I got a lot of mileage out of some Disney-licensed titles as a kid: Matterhorn Screamer and The Chase On Tom Sawyer's Island, the latter being a fanciful Pacman ripoff.  

Games like these didn't have ridiculous resolutions to work with; their sprites were rendered in big, very visible pixels.  That look was always very comforting to me.  And in a way, it's great to see it coming back with games like Flappy Bird and The Firm.  But what it really boils down to for me is simplicity in play.  I've really enjoyed the Plants vs Zombies games, and whiled away many hours on Angry Birds until they started to introduce complexities like gravity that overcomplicated things. 

Recently, I've been doing a lot of exploring of abandonware and developing ideas for a game that I could develop and release to the mobile world.  If you're a Mac user like me, I highly recommend downloading Boxer.  It's a DOS emulator that acquits itself extremely well when it comes to playing old games.  Certain games require some monkeying with processing power, and you can sandbag things pretty effectively to get them under control, with the program giving you cues as to where you're at (XT, 386, 486, etc.).  It's really a lovely program. 

I also recommend doing a Google search for "abandonware."  You'll find that just about every game from your childhood is out there for the taking.  You'll be amazed at how well these emulators can recreate the experience.  

Monday, October 27, 2014

Springing into Mongo

Much has been made in recent years of document/NoSQL databases, and for very good reason. As the problems we face as engineers evolve, so do our storage needs -- or lack of storage needs.  Heavyweight, unwieldy relational databases simply aren't always the best answer.  Pair Mongo's flexible document model with replication capabilities that work almost flawlessly out of the box and the fact that it's completely free and open source, and you have a pretty amazing alternative to "traditional" solutions like Oracle and SQL Server -- one made all the more appetizing when you consider that the longstanding independent stalwart, MySQL, now resides in the grip of Oracle.  

So how can you reap the benefits of Mongo on your project?  Per the theme of my previous blog, we're going to stay in the comfortable confines of Spring and Spring Boot.  Spring Boot has a "starter" for Mongo that is incredibly instructive.
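
If you're using Gradle, pulling the starter in is a one-liner (the Boot plugin manages the version for you):

    compile("org.springframework.boot:spring-boot-starter-data-mongodb")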

"Springing" off from this beginning, it's incredibly easy to apply the technology to the solution of your choice.  There's one minor shortcoming that needs to be patched up, however, and we will look at that a bit later on. 

For starters, let's look at how Mongo differs from standard JPA in Spring.

My favorite Mongo visualizer is Robomongo, and I highly recommend having it around if you plan on working with Mongo extensively. 


Entities  

Much like JPA entities are annotated with @Entity, any item you want to store as a Mongo doc should be annotated with @Document.  Unlike JPA, however, you don't need to annotate every field that you'd like to persist; fields are persisted by default.  To apply an index to any field, annotate it with @Indexed -- recommended for fields that are queried and accessed frequently.  Here's an example of a simple Mongo object:

import org.bson.types.ObjectId
import org.springframework.data.annotation.Id
import org.springframework.data.mongodb.core.index.Indexed
import org.springframework.data.mongodb.core.mapping.Document

@Document
class Item {

  @Id
  ObjectId id // ObjectId is a GUID-like unique ID that works well with Mongo

  @Indexed
  String itemIdentifier

  Double price
}

When you boot up your spring context for the first time and start trying to put some data in, a collection of documents will be created.  I won't belabor the differences between Mongo and a relational DB too much more, but what's worth keeping in mind is that you don't have a uniform "table structure" as such.  When you're working with object models, you will find out relatively quickly if you're persisting things right, but you often won't get the sort of show-stopping ugly errors and exceptions that a SQL-based driver would give you if you tried to do something untoward.  That being said, let's take a brief look at how you configure Spring to connect to a mongo server and database, as I didn't find this to be well-documented on Spring's site:

import com.mongodb.Mongo
import org.springframework.context.annotation.*
import org.springframework.data.mongodb.config.AbstractMongoConfiguration
import org.springframework.data.mongodb.repository.config.EnableMongoRepositories

@Configuration
@ComponentScan
@EnableMongoRepositories
class Config extends AbstractMongoConfiguration {

  protected String getDatabaseName() {
    return "myDb"
  }

  @Bean
  Mongo mongo() {
    return new Mongo('127.0.0.1:27017')
  }
}

When you combine this configuration with Spring Boot, you're pretty much good to go as far as connecting to your Mongo DB and doing the basic things.  But we all know you want to do more than the basics.  Thankfully, there are some very easy ways to pep things up.  As I mentioned in my prior blog, it's extremely easy to enable repositories in Spring.  It's pretty much the same thing with Mongo, though you need to add an @EnableMongoRepositories annotation to your configuration, as shown above.  From here, it's exactly the same exercise as it is with Spring JPA.


Repositories 

With Mongo, your repository will look identical to its JPA equivalent:

import org.bson.types.ObjectId
import org.springframework.data.domain.*
import org.springframework.data.repository.*

interface ItemRepository extends PagingAndSortingRepository<Item, ObjectId> {
  Item save(Item item)

  Item findOne(ObjectId id)

  Page<Item> findAll(Pageable options)
}

I'm also pleased to report that you can use Spring's excellent @RepositoryRestResource annotation to effortlessly ReST-enable your data layer.  I probably gushed about this enough in my last post, but this really eliminates the need to write a service layer in large part.  Like so many other things in Spring, this really saves you time when you're getting a project rolling. 
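
Applied to the repository above, it's a one-annotation affair -- a quick sketch, assuming the Spring Data REST dependencies from my previous blog (the CRUD and paging methods are inherited, so the body can even be left empty):

import org.bson.types.ObjectId
import org.springframework.data.repository.*
import org.springframework.data.rest.core.annotation.RepositoryRestResource

@RepositoryRestResource(path = "item")
interface ItemRepository extends PagingAndSortingRepository<Item, ObjectId> {
}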


Gotchas: Parent-Child relationships

In my experience with Mongo, the one hang-up I bumped into was Mongo's (or rather Spring Mongo's) implementation of parent-child relationships.  Strictly speaking, they don't exist.  Mind you, there IS a @DBRef annotation that tells the framework to establish a connection between collections, but in my experience, it was a bit of a poor substitute for something like @OneToMany and @ManyToOne.  

This being said, I stumbled upon a functional gap that most people seem to circumvent in this scenario.  You see, the commonly-accepted way to persist children in Mongo is to simply ship the entire collection of children in with the parent object.  Mongo supports arrays -- so this can be reasonably efficient if your documents are reasonably sized.  My use case involved a small parent document with vast numbers of children.  So I needed to link one collection to another, but I also needed to be able to get all of the children in one shot, and more importantly, I wanted to be able to add a child without pulling down the entire parent document first.  This would have been even more difficult in a concurrent or asynchronous setting.  

Thankfully, with a little bit of code and -- admittedly -- some trial and error, I was able to come up with a workable solution.  And I'll admit, it took me on a trip down memory lane.  As you've seen here, my examples are all in Groovy.  I rarely code in Java anymore if I can avoid it, and in putting this solution together, I was required to tap into some old friends from the Java/Spring way.  Without further ado, here's the code: 

import java.lang.annotation.*

@Retention(RetentionPolicy.RUNTIME)
@Target([ ElementType.FIELD ])
public @interface Parent {
}

------
import com.mongodb.DBObject
import com.rocksoft.example.domain.Parent // see above
import org.springframework.beans.factory.annotation.Autowired
import org.springframework.data.mongodb.core.MongoOperations
import org.springframework.data.mongodb.core.mapping.DBRef
import org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener
import org.springframework.stereotype.Component
import org.springframework.util.ReflectionUtils
import java.lang.reflect.Field


@Component
class MongoListener extends AbstractMongoEventListener {

  @Autowired
  MongoOperations mongoOperations

  public void onAfterSave(final Object source, DBObject dbo) {
    ReflectionUtils.doWithFields(source.class, 
      new ReflectionUtils.FieldCallback() {
      void doWith(Field field) {
        if (field.isAnnotationPresent(DBRef) &&
            field.isAnnotationPresent(Parent)) {
          ReflectionUtils.makeAccessible(field)
          def fieldValue = field.get(source)
          Field parentField = fieldValue.class.declaredFields.find {
            (it.genericType?.hasProperty('actualTypeArguments') &&
             it.genericType?.actualTypeArguments?.first() ==
             source.class) || it.type == source.class
          }

          ReflectionUtils.makeAccessible(parentField)
          if (Collection.isAssignableFrom(parentField.type)) {
            Collection value = parentField.get(fieldValue)
            if (!value) {
              value = []
            }

            value << source
            fieldValue."$parentField.name" = value
          } else {
            fieldValue."$parentField.name" = source
          }

          mongoOperations.save(fieldValue)
        }
      }
    })
  }

}

There is a bit of hullabaloo here, but I'll try to summarize what you're seeing.  First off, we owe the ease of doing this to the Mongo event-listening capabilities in Spring.  What we're doing, in short, is watching the objects that come in and seeing if each one is called out as having a parent that needs to know about it.  To "tag" something as part of a parent-child relationship, we put the @Parent annotation where we would normally have a @ManyToOne, along with a @DBRef annotation, so Spring knows that it needs to link things up.  A lot of the code you're seeing simply finds the target field in the parent object.  This can be done via one-to-one or one-to-many.  This would need to be enhanced a bit for true production use, as a non-collection genericized type would cause the code to crash and burn.  Finally, we set the value back on the parent, and we use the framework-provided MongoOperations class from our context to plug the modified object back in. 

There's one final gotcha that had me wrapped around the axle.  On the parent side's @DBRef annotation (the collection field in the parent), you will need to add a lazy = true attribute. Failure to do so resulted in a StackOverflowError that I didn't spend too much time chasing, as my use case essentially screamed for lazy collections.  
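
Putting it all together, the two sides of the relationship come out looking something like this.  Order and LineItem are hypothetical stand-ins for my actual domain:

import org.bson.types.ObjectId
import org.springframework.data.annotation.Id
import org.springframework.data.mongodb.core.mapping.DBRef
import org.springframework.data.mongodb.core.mapping.Document

@Document
class Order {
  @Id
  ObjectId id

  @DBRef(lazy = true) // lazy avoids the StackOverflowError mentioned above
  List<LineItem> lineItems = []
}

@Document
class LineItem {
  @Id
  ObjectId id

  @DBRef
  @Parent // saving a LineItem prompts the listener to append it to its Order
  Order order
}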


In Closing

Hopefully this can save some time for those of you that are looking to make the jump to Mongo for persistence.  The great news is that once you've laid the foundation down with Spring, it's really pretty easy to swap out your database underneath.  Mongo puts reliable replication and extremely flexible storage options at your fingertips.  But as with everything else, I strongly urge you to look at the costs and benefits. For the parent/child requirement, I think a relational database is probably a better option in the end, simply because it doesn't require the "duct tape" I shared above.  I ended up settling on PostgreSQL, another open source and otherwise-unaffiliated alternative that is quite easy to install and maintain.

Wednesday, October 22, 2014

The Power Of Spring Boot

Spring Boot is described by its creators as an "opinionated" way to go about building production-ready Spring applications.  As a long-time advocate of this ever-evolving, incredibly handy framework, I was wielding a hammer and searching for nails.  When I finally started a suitable project, I was eager to dig in and see what kind of savings in time and sanity I could reap. 

Where Spring has always come in incredibly handy is in simplifying the things we do with Java and taking some of the guesswork out of common operations.  A lot of us in the Java community have taken it a step further by adopting Groovy (or another JVM language) as our main coding mechanism.  The deal gets even sweeter when you mix in Spring's dependency injection capabilities as well as its myriad framework features. 

Let's face it: if you're an engineer building "business" apps, most of what you're going to do is take data from one place and put it into another.  There are a lot of creative ways to do this.  I'll admit that I've occasionally gone overly elaborate when it wasn't necessary, but boredom and repetition drive us to do some strange things sometimes.  What I've seen with Spring Boot so far is really changing the entire discourse around these mundane activities.  It frees us up to do things that are less repetitive. And potentially more creative! 


Step 0: Bootstrapping

Using the build automation tool of your choice (I like Gradle), follow the guidelines on Spring's site to make a simple build script.  Also, create a simple bean annotated with @Configuration and @ComponentScan.
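
For reference, a minimal build.gradle comes out something like this.  The versions are simply what's current as I write; adjust to taste:

buildscript {
  repositories { mavenCentral() }
  dependencies {
    classpath("org.springframework.boot:spring-boot-gradle-plugin:1.1.9.RELEASE")
  }
}

apply plugin: 'groovy'
apply plugin: 'spring-boot'

repositories { mavenCentral() }

dependencies {
  compile("org.codehaus.groovy:groovy-all:2.3.7")
  compile("org.springframework.boot:spring-boot-starter-data-jpa")
  runtime("com.h2database:h2") // or the JDBC driver of your choice
}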


Step 1: Persistence

We usually start with some sort of a data model.  This has been exceedingly easy with Spring.  You have some tables that you need to represent as objects in your code?  No problem.

import javax.persistence.*

@Entity
@Table(name = "item")
class Item {
 @Id
 @Column(name = "item_id")
 Long id

 @Column(name = "item_name")
 String name

}


Step 2: Make a repository

This is where it starts to get pretty cool.  The @Repository stereotype has really evolved over time.  To make a fully functional repository that can be wired in for data access, just make an interface that looks something like this: 

import org.springframework.data.domain.*
import org.springframework.data.repository.*

interface ItemRepository extends PagingAndSortingRepository<Item, Long> {

 Item save(Item item)
 Item findOne(Long id)
 Page<Item> findAll(Pageable options)

}

You see we used the souped-up PagingAndSortingRepository, but there are other options that are slightly more generic.  It all depends upon what you need.  At this point, you could make a rudimentary Spring app that accessed your database, just by wiring in your repo. 

.
.
.
@Autowired
ItemRepository repo

void foo() {
  repo.save(new Item(id: 1L, name: 'foobar'))
  assert repo.findOne(1L).name == 'foobar'
}
.
.
.

Step 3: Now it gets really cool

This is all fine and dandy, but not super practical.  What we'd normally do at this point is whip up some kind of a ReST service that would broker the situation for us.  The fine folks behind Spring realize that we're sick of doing this repeatedly, and they've made it insanely easy.  Enter Spring Data REST.  We can modify what we have above incredibly easily.  If you're using Gradle, you just have to make sure you have the right dependencies:

    compile("org.springframework.boot:spring-boot-starter-data-rest")
    compile("org.springframework.boot:spring-boot-starter-data-jpa")

Then add another annotation to your config: 

@Import(RepositoryRestMvcConfiguration)

Then make a few minor alterations to your repo: 

import org.springframework.data.domain.*
import org.springframework.data.repository.*
import org.springframework.data.repository.query.Param
import org.springframework.data.rest.core.annotation.RepositoryRestResource

@RepositoryRestResource(path = "item")
interface ItemRepository extends PagingAndSortingRepository<Item, Long> {

 Item save(@Param("item") Item item)
 Item findOne(Long id)
 Page<Item> findAll(Pageable options)

}

Start up your application, and you'll have an endpoint hanging off of your context root at "item."  If you access it, you'll see that you get a JSON response.  What's pretty nifty here is that it will only support the ReST equivalent of the operations you've called out.  See the Spring site for a full list.  Given the code we've sketched out, you can do a GET, one with an ID parameter and one without, and a POST.  If you POST directly to the "item" endpoint, you'll see some data saved.  Your body would look something like this: 

{
  "id": 2,
  "name": "foobar"
}
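
Or, from the command line (assuming Boot's default port):

curl -X POST -H "Content-Type: application/json" \
  -d '{"id": 2, "name": "foobar"}' http://localhost:8080/item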

This is really what makes this framework cool.  For a large percentage of apps, there's simply no need to even write a service layer.  Model up your objects, spell out your data operations in an interface, and you're set.  No more boilerplate madness. 


Things to consider

In closing, if you're thinking about making the data situation a bit more robust, bear in mind that Spring Data REST deals in "links" when modeling relationships.  In referring to an object, you have to provide its URL.  It appears to me that Spring just peels the meaningful part off of the URL you provide, but I haven't dug deep enough to guarantee it. 

A final thing that I hope can save some of you a bit of time: you can't persist child objects through the parent object's REST endpoint.  Say "Item" has "ChildItem" objects.  You will need to expose a ChildItem repository with its respective @RepositoryRestResource. This may seem unwieldy at first, but I think it's fair to have to call out the ReST operations you'd like the child object to support.  Say you want to create a child object: 

{
  "id": 4,
  "parent": "http://my.site.com/item/2",
  "foo": "bar"
}

This was another thing that had me running all over the place and didn't seem to be well-documented.  

Happy Springing!