Posts Tagged ‘release’
It’s done, finally!
Bidding farewell to my pet peeve
In my job at GK Software I have the pleasure of developing technology based on Eclipse. But the colleagues consuming my technology work on software that has no direct connection to Eclipse or OSGi. Their build technology of choice is Maven (without Tycho, that is). So whenever their build touches my technology, we are facing a “challenge”. It doesn’t make a big difference whether they are just invoking a code generator built using Xtext or whether some Eclipse technology should actually be included in their application runtime.
Among many troubles, I recall one situation that really opened my eyes: one particular build had been running successfully for some time, until one day it was fubar. One Eclipse artifact could no longer be resolved. Long nights of searching followed: why might that artifact have disappeared? But we reassured ourselves: nothing had disappeared. Quite to the contrary, somewhere on the wide internet (Maven Central, to be precise) a new artifact had appeared. So what? Well, that artifact was the same one we also had on our internal servers. Well, if it’s the same, what’s the buzz? It turned out it had a one-character difference in its version: instead of 1.2.3.v20140815 its version was 1.2.3-v20140815. Yes, take a close look, there is a difference. Bottom line: with both almost-identical versions available, Maven couldn’t figure out what to do; perhaps each was considered worse than the other, to the effect that Maven simply failed to use either. Go figure.
More stories like this and I realized that relying on Eclipse artifacts in Maven builds was always at the mercy of some volunteers, people who typically don’t have a long-term relationship with Eclipse, but who filled a major gap by uploading individual Eclipse artifacts to Maven Central (thanks to you volunteers, and please don’t take it personally: I’m happy that your work is no longer needed). Anybody who has ever studied the differences between Maven and OSGi (wrt dependencies and building, that is) will immediately see that there are many possible ways to represent Eclipse artifacts (OSGi bundles) in a Maven pom. The resulting “diversity” was one of my pet peeves in my job.
At this point I decided to be the next volunteer: not one who would screw up other people’s builds, but one who would collaborate with the powers that be at Eclipse.org to produce the official uploads to Maven Central.
As of today, I can report that this dream has become reality: all relevant artifacts of Neon.2 that are produced by the Eclipse Project are now “officially” available from Maven Central.
Bridging between universes
I should like to report some details of how our artifacts are mapped into the Maven world:
The main tool in this endeavour is the CBI aggregator, a model-based tool for transforming p2 repositories in various ways. One of its capabilities is to create a Maven repository (a dual-use repo, actually, but the p2 side of this is immaterial to this story). That tool does a great job of extracting metadata from the p2 repo in order to create “meaningful” pom files, the key feature being: it copies all dependency information, which is originally authored in MANIFEST.MF, into corresponding declarations in the pom file.
Still, a few things had to be settled, either by improving the tool, by fine-tuning the input to the tool, or by post-processing the resulting Maven repo.
- Group IDs
While OSGi artifacts only have a single qualified Bundle-SymbolicName, Maven requires a two-part name: groupId × artifactId. It was easy to agree on using the full symbolic name as the artifactId, but what should the groups be? We settled on these three groups for the Eclipse Project:
- Version numbers
In Maven land, release versions have three segments; in OSGi we maintain a fourth segment (the qualifier) also for releases. To play by Maven rules, we decided to use three-part versions for our uploads to Maven Central. This emphasizes the strategy of publishing only releases, for which the first three parts of the version are required to be unique.
- 3rd party dependencies
All non-Eclipse artifacts that we depend on should be referenced by their proper coordinates in Maven land. By default, the CBI aggregator assigns all artifacts to the synthetic group p2.osgi.bundle, but if someone depends on p2.osgi.bundle:org.junit this doesn’t make much sense. In particular, it must be avoided that projects consuming Eclipse artifacts get the same 3rd-party library under two different names (perhaps even in different versions). We identified 16 such libraries and their proper coordinates.
- Source artifacts
Eclipse plug-ins have their source code in corresponding .source plug-ins. Maven has a similar convention, just using a “classifier” instead of appending to the artifact name. In Maven we conform to the Maven convention, so that tools like m2e can correctly pick up the source code of any dependency.
- Other meta data
Then followed a hunt for project URLs, SCM coordinates, artifact descriptions and related data. Much of this could be retrieved from our MANIFEST.MF files; some information is currently mapped using a static, manually maintained mapping. Other information, like licences and organization, is fully static during this process. In the end, all of it was approved by the validation on the OSSRH servers.
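The mapping rules from the list above can be condensed into a few lines of Java. This is only an illustrative sketch, not the actual CBI aggregator code: the group names and the 3rd-party table shown here are assumptions for the example.

```java
import java.util.Map;

/** Illustrative sketch of the bundle-to-Maven mapping described above.
 *  Group assignment and the 3rd-party table are example data, not the
 *  real aggregator configuration. */
public class CoordinateMapper {

    // A couple of 3rd-party bundles and their proper Maven coordinates (examples).
    static final Map<String, String> THIRD_PARTY = Map.of(
            "org.junit", "junit:junit",
            "org.apache.commons.logging", "commons-logging:commons-logging");

    /** OSGi 4-part version (with qualifier) -> Maven 3-part release version. */
    static String toMavenVersion(String osgiVersion) {
        String[] parts = osgiVersion.split("\\.", 4); // the qualifier is the 4th part
        return parts[0] + "." + parts[1] + "." + parts[2];
    }

    /** Bundle-SymbolicName -> "groupId:artifactId". */
    static String toMavenCoordinates(String symbolicName, String group) {
        if (THIRD_PARTY.containsKey(symbolicName))
            return THIRD_PARTY.get(symbolicName); // proper upstream coordinates
        return group + ":" + symbolicName;        // full symbolic name = artifactId
    }

    /** ".source" bundles map to the main artifact plus a "sources" classifier. */
    static String toClassifier(String symbolicName) {
        return symbolicName.endsWith(".source") ? "sources" : "";
    }
}
```

Under these rules, a bundle like org.eclipse.jdt.core with OSGi version 3.12.2.v20161117-1814 would surface on Maven Central as a three-part release version, its source bundle as the same artifact with the sources classifier.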
If you want to browse the resulting wealth, you may start at
Everything with fully qualified artifact names in these groups (and date of 2017-01-07 or newer) should be from the new, “official” upload.
This is just the beginning
The bug on which all this has been booked is Bug 484004: Start publishing Eclipse platform artifacts to Maven central. See the word “Start”?
The follow-up tasks are already on the board:
(1) Migrate all the various scripts, tools, and models to the proper git repo of our releng project. At the end of the day, this process of transformation and upload should become a routine operation to be invoked by our favourite build meisters.
(2) Fix any quirks in the generated pom files. E.g., we already know that the process did not handle fragments in an optimal way. As a result, consuming SWT from the new upload is not straightforward.
Both issues should be handled in or around bug 510072, in the hope that, when we publish Neon.3, the new “official” Maven coordinates of Eclipse artifacts will be fit for all real-world use. So: please test, and report in the bug any problems you might find.
(3) I was careful to say “Eclipse Project”. We don’t yet have the magic wand to apply this to literally all artifacts produced in the Eclipse community. Perhaps someone will volunteer to apply the approach to everything from the Simultaneous Release? If we can publish 300+ artifacts, we can also publish 7000+, can’t we? 🙂
In a previous post I showed how the tycho-compiler-jdt Maven plug-in can be used for compiling OT/J code with Maven.
Recently, I was asked how the same can be done for OT/Equinox projects. Given that we were already using parts from Tycho, this shouldn’t be so difficult, right?
Also in this case, finding the solution is easy once you know it. Here it is:
We use almost the same declaration as for plain OT/J applications:
So, what’s the difference? In both cases we need to adapt the tycho-compiler-jdt plug-in, because that’s where we replace the normal JDT compiler with the OT/J variant. However, for plain OT/J applications tycho-compiler-jdt is pulled in as a dependency of maven-compiler-plugin and must be adapted on this path of dependencies, whereas in Tycho projects tycho-compiler-jdt is pulled in from tycho-compiler-plugin. Apparently, the exclusion mechanism is sensitive to how exactly a plug-in is pulled into the build. Interesting.
Once I figured this out, I created and published a new version of our Maven support for Object Teams: objectteams-parent-pom:2.1.1 — publishing Maven support for Object Teams 2.1.1 was overdue anyway 🙂
With the updated parent pom, a full OT/Equinox hello world pom now looks like this:
Looks pretty straight forward, right?
To see the full OT/Equinox Hello World Example configured for Maven/Tycho, simply import OTEquiTycho.zip as a project into your workspace.
As part of Juno Milestone 7, Object Teams, too, has delivered its Milestone 7.
The main new feature in this milestone: hot code replacement finally works when debugging OT/J or OT/Equinox applications.
What took us so long to make it work?
- Well, hot code replacement didn’t work out of the box because our load-time weaver only kicked in the first time each class was loaded. When a class was redefined in the running VM, the weaver was not called, so the class signature could differ between the originally loaded version and the redefined version. The VM would then reject the redefinition due to that signature change.
- Secondly, we didn’t address this issue earlier, because I suspected this would be pretty tricky to implement. When I finally started to work on it, reality proved me wrong: the fix was actually pretty simple 🙂
In fact the part that makes it work even in an Equinox setting is so generic that I proposed to migrate the implementation into either Equinox or PDE/Debug, let’s see if there is interest.
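The gist of such a fix can be sketched with the standard java.lang.instrument API: a registered transformer is invoked both on first load and on redefinition (where classBeingRedefined is non-null), so a weaver merely has to produce the same woven shape in both cases. This is a minimal illustration of the principle, not the actual OT/Equinox weaver:

```java
import java.lang.instrument.ClassFileTransformer;
import java.security.ProtectionDomain;

/** Minimal illustration: apply the identical weaving step on initial load
 *  and on redefinition, so the class signature never changes between the two. */
public class RedefinitionAwareWeaver implements ClassFileTransformer {

    @Override
    public byte[] transform(ClassLoader loader, String className,
            Class<?> classBeingRedefined, ProtectionDomain domain,
            byte[] classfileBuffer) {
        // classBeingRedefined == null  -> first load of the class
        // classBeingRedefined != null  -> hot code replacement in the VM
        // Either way, run the same weaving step, keeping signatures stable:
        return weave(classfileBuffer);
    }

    /** Stand-in for the real bytecode weaving (e.g. adding OT/J infrastructure). */
    byte[] weave(byte[] bytes) {
        return bytes.clone(); // a real weaver would rewrite the bytecode here
    }
}
```

As long as weave produces the same class shape for the same input, the VM sees no signature change on redefinition and accepts the hot code replacement.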
Now, when you debug any Object Teams application, your code changes can be applied live in the running debug target, no matter whether you are changing teams, roles or base classes. Together with our already great debugging support, this makes debugging Object Teams programs faster still.
More new features can be found in the New&Noteworthy (accumulated since the Indigo release).
The path behind us
Annotation-based null analysis has been added to Eclipse in several stages:
- Using OT/J for prototyping
- As discussed in this post, OT/J excelled once more in a complex development challenge: it solved the conflict between extremely tight integration and separate development without double maintenance. That part was real fun.
- Applying the prototype to numerous platforms
- Next I reported that only one binary deployment of the OT/J-based prototype sufficed to upgrade any of 12 different versions of the JDT to support null annotations — looks like a cool product line.
- Pushing the prototype into the JDT/Core
- Next, all of the JDT team (Core and UI) invested effort to make the new feature an integral part of the JDT. Thanks to all for this great collaboration!
- Merging the changes into the OTDT
- Now that the new stuff was merged back into the plain-Java implementation of the JDT, it was no longer applicable to other variants, but the routine merge between JDT/Core HEAD and Object Teams automatically brought it back for us. As of the OTDT 2.1 M4, annotation-based null analysis is an integral part of the OTDT.
Where we are now
Regarding the JDT, others like Andrey, Deepak and Aysush have beaten me to blogging about the new coolness. It seems the feature even made it to become a top mention of the Eclipse SDK Juno M4. Thanks for spreading the word!
Two problems of OOP, and their solutions
Now, OT/J with null annotations is indeed an interesting mix, because it solves two inherent problems of object-oriented programming, which couldn’t differ more:
1.: NullPointerException is the most widespread and most embarrassing bug that we produce day after day, again and again. Pushing support for null annotations into the JDT has one major motivation: if you use the JDT but don’t use null annotations, you’ll no longer have an excuse. For no good reason your code will retain these miserable properties:
- It will throw those embarrassing NPEs.
- It doesn’t tell the reader about fundamental design decisions: which part of the code is responsible for handling which potential problems?
2.: Objectivity seems to be a central property of any approach that is based just on objects. While so many other activities in software engineering are based on the insight that complex problems with many stakeholders involved are best addressed using perspectives and views etc., OOP forces you to abandon all that: an object is an object is an object. Think of a very simple object: a File. Some part of the application will be interested in the content, so it can decode the bytes and do something meaningful with them; another part of the application (maybe an underlying framework) will mainly be interested in the path in the filesystem and how it can be protected against concurrent writing; still other parts don’t care about either, but only let you send the thing over the net. By representing the “File” as an object, that object must have all properties that are relevant to any part of the application. It must be openable, lockable and sendable and whatnot. This yields bloated objects and unnecessary, sometimes daunting dependencies. Inside the object, all those different use cases it is involved in cannot be separated!
With roles, objectivity is replaced by a disciplined form of subjectivity: each part of the application will see the object with exactly those properties it needs, mediated by a specific role. New parts can add new properties to existing objects, not in the unsafe style of dynamic languages, but strictly typed and checked. What does this mean for practical design challenges? It gives, e.g., direct support for feature-oriented designs: the direct path to painless product lines etc.
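A rough plain-Java approximation shows the idea of per-concern views on one shared object; all names here are hypothetical, and OT/J roles provide this directly (including per-role state and checked role binding) without the hand-written plumbing:

```java
// Plain-Java approximation of per-concern views on one shared object.
class FileCore {                       // the bare object: just an identity and a path
    final String path;
    FileCore(String path) { this.path = path; }
}

class ContentView {                    // what the decoding part of the app sees
    final FileCore base;
    ContentView(FileCore base) { this.base = base; }
    String decode() { return "contents of " + base.path; }
}

class LockingView {                    // what the underlying framework sees
    final FileCore base;
    boolean locked;                    // view-specific state, not forced onto FileCore
    LockingView(FileCore base) { this.base = base; }
    void lock() { locked = true; }
}
```

Neither view forces its methods onto the other: FileCore stays small, and a new concern simply adds another view. What OT/J adds beyond this pattern is declarative role binding, implicit lifting from base objects to roles, and team-level activation.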
Just like the dot, objectivity seems to be hardcoded into OOP. While null annotations make the dot safe(r), the roles and teams of OT/J add a new dimension to OOP in which perspectives can be used directly in the implementation. Maybe it does make sense to have both capabilities in one language 🙂 although one of them cleans up what should have been sorted out many decades ago, while the other opens new doors towards the future of sustainable software designs.
The road ahead
The work on null annotations goes on. What we have in M4 is usable, and I can only encourage adopters to start using it right now, but we still have an ambitious goal: eventually, the null analysis shall not only find some NPEs in your program; the absence of null-related errors and warnings shall give the developer the guarantee that this piece of code will never throw an NPE at runtime.
What’s missing towards that goal:
- Fields: we don’t yet support null annotations for fields. This is next on our plan, but one particular issue will require experimentation and feedback: how do we handle the initialization phase of an object, during which fields start out as null? More on that soon.
- Libraries: we want to support null specifications for libraries that have no null annotations in their source code.
- JSR 308: only with JSR 308 will we be able to annotate all occurrences of types, like, e.g., the element type of a collection (think of
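Since Java 8, JSR 308 indeed allows annotations on any use of a type, which is exactly what the element-type case needs. The following sketch uses a locally defined stand-in annotation to show the syntax; the class and method names are made up for the example:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.List;

// Local stand-in for a JDT-style @NonNull, targeting type uses (JSR 308).
@Target(ElementType.TYPE_USE)
@Retention(RetentionPolicy.CLASS)
@interface NonNull {}

class Names {
    /** JSR 308 lets us state that the *elements* are non-null,
     *  not just the list reference itself. */
    static String first(List<@NonNull String> names) {
        return names.get(0); // the analysis may assume a non-null element here
    }
}
```

With a declaration-only annotation (pre-JSR 308), the position inside the angle brackets would simply be a syntax error; TYPE_USE is what makes element types annotatable.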
Please stay tuned as the feature evolves. Feedback including bug reports is very welcome!
Ah, and one more thing in the future: I finally have the opportunity to work out a cool tutorial with a fellow JDT committer: How To Train the JDT Dragon with Ayushman. Hope to see y’all in Reston!
Object Teams made the next step for even better integration in Eclipse: starting with today’s Indigo Milestone 3 we’re aboard! No further fiddling with URLs of update sites, simply select the Indigo site…
… and directly install the OTDT from there.
For those who haven’t tried Object Teams before, here are steps 1, 2 & 3:
- Walk through the Quick Start
- Read the basic concepts in our OT/J Primer
- Play with more Examples or browse some Object Teams Patterns
The main concern of this milestone release was catching up with the Indigo train, so the list of bugs fixed is shorter than usual. But, hey, by upgrading to Eclipse SDK 3.7M3, the OTDT also supports some new JDT coolness, like, e.g., improved static analysis (I happen to have a finger in this pie, too 🙂 ).
With each new release it is good to test compatibility with older software. So I loaded my previous experiment (“project medal”) into the new milestone. Much as I expected: that little adaptation of the JDT still works as intended in the new version, too, which would not have been possible by patching the original implementation. This again shows: in-place patching can never give you what OT/Equinox excels at: evolvable, modular adaptation.
On a day like this (2010/07/07), it should be evident what is better than one or a few stars: a strong team!
But when you look at what happens in your software at runtime, all you see is a soup of individuals (called objects) running around all over the place (remember: standard module systems do not create boundaries around runtime objects!).
In all humbleness, are you surprised to learn that it was a German invention back in 2002, that objects should team up?
This is the first release after the move of the project from objectteams.org to Eclipse.org, at which point it is time to express a big thank-you to all who helped along the way: Mentors, EMO, Legal, the Tools-PMC, and – of course – the many Contributors. This is: a team effort!
Now you might say: that’s a pretty scattered team: some people at the Eclipse Foundation in Ottawa, some students in Berlin, people from Austin, and where not. But that’s actually the point about a team: you start from a set of individuals who initially need not have any particular relationship. Then you create a team in which each individual takes on one particular role. This means you further specialize these existing individuals (you wouldn’t want a team of 11 goalkeepers, would you? Even with a Manuel Neuer giving a perfect forward pass, it takes a Miroslav Klose to score the goal). And then you unite all members of the team towards a common goal, giving the team a new identity, so that the team acts as one.
Still, each team member brings into the team the strengths of their particular background, meaning: the individuals do not completely disappear; some properties of the individuals shine through when they play their roles in the team.
Now, what’s that got to do with software?
Say you already have the core of an application implemented, and now it’s your task to implement one or two more user stories on top of the existing code. You look at the existing classes and mark those that are in some way or other related to what you need to implement. One user story relates to all classes marked red, another one to those marked greenish, etc. (and do expect overlap):
How do you implement the user story that involves all the red classes, such that the new implementation sits in a nice new module that concentrates on only this one task/user story?
Consider the red entities as plain individuals; they don’t know about the new task they should contribute to. Also keep in mind that not all instances of those red classes will participate in the new user story. What we need to do is: specialize a few of those individuals so they can play particular roles wrt the new task, and unite those roles within a new team.
If you live in flat-land, this is tremendously difficult, but if you’re able to just add one more dimension to the picture …
… the solution is very straightforward:
- The implementation of the red user story is a strong, cohesive team
- Its members are roles specialized in their particular sub-tasks.
- Each role relates (↓) to one individual from the application core and specializes what is already given towards what the team requires.
- How exactly each role relates to its base is declared using two kinds of atomic bindings: callout and callin method bindings (see, e.g., this post).
- These roles only affect the system as long as the team context is active, and activation happens per team instance (with options for fine-tuning).
- Other teams may be formed for other purposes / user stories (see the greenish team)
- Even if you start already with this 3-D picture, with Object Teams you can always add one more dimension to your architecture, if needed.