The Object Teams Blog

Adding team spirit to your objects.

Posts Tagged ‘modularity’

Book chapter published: Confined Roles and Decapsulation in Object Teams — Contradiction or Synergy?


I strongly believe that for perfect modularity, encapsulation in plain Java is both too weak and too strong. This is the fundamental assumption behind a book chapter that has just been published by Springer.

The book is:
Aliasing in Object-Oriented Programming. Types, Analysis and Verification
My chapter is:
Confined Roles and Decapsulation in Object Teams — Contradiction or Synergy?

The concepts in this chapter relate back to the academic part of my career, but all of my more pragmatic tasks since those days indicate that we are still very far away from optimal modularity, and both mistakes are found in real-world software: permitting access too openly and restricting access too strictly. More often than not, it’s the same software exhibiting both mistakes at once.

For the reader unfamiliar with the notions of alias control etc., let me borrow from the introduction of the book:

Aliasing occurs when two or more references to an object exist within the object graph of a running program. Although aliasing is essential in object-oriented programming as it allows programmers to implement designs involving sharing, it is problematic because its presence makes it difficult to reason about the object at the end of an alias—via an alias, an object’s state can change underfoot.

ISBN: 978-3-642-36946-9

Aliasing, by the way, is one of the reasons why analysis for @Nullable fields is a thorny issue. If alias control could be applied to @Nullable fields in Java, much better static analysis would be possible.
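To see why, consider a minimal Java sketch (hypothetical classes, using the Eclipse JDT annotations): a null check on a field proves little if another alias can write to the same object in between.

    import org.eclipse.jdt.annotation.Nullable;

    class Holder {
        @Nullable String value;
    }

    class Client {
        void update(Holder h, Holder other) {
            if (h.value != null) {    // the check succeeds...
                other.value = null;   // ...but 'other' may alias 'h'...
                h.value.length();     // ...so this dereference is not actually safe
            }
        }
    }

Only if the analysis could prove that no alias of h is written to between check and use would the dereference be justified.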


How is encapsulation in Java too weak?

Java only allows protecting code, not objects

This manifests at two levels:

Member access across instances

In a previous post I mentioned that the strongest encapsulation in Java – using the private modifier – doesn’t help to protect a person’s wallet against access from any other person. This is a legacy from the pre-object-oriented days of structured programming. In terms of encapsulation, a Java class is a module utterly unaware of the concept of instantiation. This defect is even more frustrating as better precedents (think of Eiffel) had been set before the inception of Java.

A type system that is aware of instances, not just classes, is a prerequisite for any kind of alias control.

Object access meets polymorphism

Assume you declare a class as private because you want to keep a particular thing absolutely local to the current module. Does such a declaration provide sufficient protection? No. That private class may just extend another – public – class or implement a public interface. By using polymorphism (an invisible type widening suffices) an instance of the private class can still be passed around in the entire program – by its public super type. As you can see, applying private at the class level doesn’t make any objects private; only the class itself is. Since every class extends at least Object, there is no way to confine an object to a given module; by widening, all objects are always visible to all parts of the program. Put dynamic binding of method calls into the mix, and all kinds of “interesting” effects can be “achieved” on an object whose class is “secured” by the private keyword. The sketch below shows how easily such an instance escapes.
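A minimal Java sketch (hypothetical names) of the escape route just described:

    public class Module {
        // the class itself is invisible outside this module...
        private static class Secret implements Runnable {
            public void run() { /* behavior meant to stay module-local */ }
        }

        // ...but widening to a public supertype lets its instances escape:
        public static Runnable leak() {
            return new Secret(); // invisible widening to Runnable
        }
    }

Any client can now invoke run() on the returned object; and since every class extends Object, even a class implementing no public interface leaks at least by that type.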

The type system of OT/J supports instance-based protection.

Java’s deficiencies outlined above are overcome in OT/J by two major concepts:

Dependent types
Any role class is a property of the enclosing team instance. The type system allows reasoning about how a role is owned by this enclosing team instance. (read the spec: 1.2.2)
Confined roles
The possible leak by widening can be prevented by sub-classing a predefined role class Confined which does not extend Object. (read the spec: 7.2) See the sketch after this list.
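A minimal sketch of the second concept (Secret is a hypothetical base class):

    public team class Vault {
        // Confined does not extend java.lang.Object, so instances of Key
        // cannot be widened and thus cannot leave this Vault instance
        protected class Key extends Confined playedBy Secret {
            // role functionality, usable only within the enclosing team
        }
    }

Outside the Vault there simply exists no type to which a Key reference could be widened, so no reference can escape.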

For details of the type system, why it is suitable for mending the given problems, and why it doesn’t hurt in day-to-day programming, I have to refer you to the book chapter.


How is encapsulation in Java too strict?

If you are a developer with a protective attitude towards “your” code, you will make a lot of things private. Good for you, you’ve created (relatively) well encapsulated software.
But when someone else is trying to make use of “your” code (re-use) in a slightly unanticipated setting (calling for unanticipated adaptations), guess what: s/he’ll curse you for your protective attitude.
Have you ever tried to re-use an existing class and failed, because some **** detail was private and there was simply no way to access or override that particular piece? When you’ve been in this situation before, you’ll know there are basically two answers:

  1. Give up, simply don’t use that (overly?) protective class and recode the thing (which more often than not causes a ripple effect: want to copy one method, end up copying 5 or more classes). Object-orientation is strong on re-use, heh?
  2. Use brute force and don’t tell anybody (tools that come in handy are: binary editing of the class file, or calling Method.setAccessible(true)). I’m not quite sure why I keep thinking of Core Wars as I write this 🙂

OT/J opens doors for negotiation, rather than arms race & battle

The Object Teams solution rests on two pillars:

  1. Give a name to the act of disregarding an encapsulation boundary: decapsulation. Provide the technical means to punch tiny little holes into the armor of a class / an object. Make this explicit so you can talk and reason about the extent of decapsulation. (read the spec: 2.1.2(c)) A sketch follows after this list.
  2. Separate technical possibility from questions of what is / should / shouldn’t be allowed. Don’t let technology dictate rules, but make it possible to formulate and enforce such rules on top of the existing technology.
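In OT/J, the explicit hole is a callout binding. This sketch (LegacyService and its private method are hypothetical) shows what such a declared exception looks like:

    public team class Adaptation {
        protected class Peek playedBy LegacyService {
            // callout binding to a *private* base method: the compiler reports
            // this as decapsulation, so the hole is explicit and reviewable
            String internalState() -> String getInternalState();
        }
    }

The point is not that the hole exists – brute force achieves that, too – but that it has a name and a location that tools, reviewers and policies can reason about.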

Knowing that this topic is controversial I leave it at this for now (a previous discussion in this field can be found here).


Putting it all together

  1. If you want to protect your objects, do so using concepts that are stronger than Java’s “private”.
  2. Using decapsulation where needed fosters effective re-use without the need for “speculative API”, i.e., making things public “just in case”.
  3. Dependent types and confined roles are a pragmatic entry into the realm of programs that can be statically analysed, with strong guarantees given by such analysis.
  4. Don’t let technology dictate the rules, we need to put the relevant stakeholders back into focus: Module providers, application developers and end users have different views. Technology should just empower each of them to get their job done, and facilitate transparency and analyzability where desired.

Some of this may be surprising, some may sound like a purely academic exercise, but I’m convinced that the above ingredients, as supported in OT/J, enable the development of complex modular systems with unprecedentedly low effective coupling.

FYI, here’re the section headings of my chapter:

  1. Many Faces of Modularity
  2. Confined Roles
    1. Safe Polymorphism
    2. From Confined Types to Confined Roles
    3. Adding Basic Flexibility to Confined Roles
  3. Non-hierarchical Structures
    1. Role Playing
    2. Translation Polymorphism
  4. Separate Worlds, Yet Connected
    1. Layered Designs
    2. Improving Encapsulation by Means of Decapsulation
    3. Zero Reference Roles
    4. Connecting Architecture Levels
  5. Instances and Dynamism
  6. Practical Experience
    1. Initial Comparative Study
    2. Application in Tool Smithing
  7. Related Work
    1. Nesting, Virtual Classes, Dependent Types
    2. Multi-dimensional Separation of Concerns
    3. Modules for Alias Control
  8. Conclusion

Written by Stephan Herrmann

March 30, 2013 at 22:11

Modularity at JavaOne 2012


Those planning to attend JavaOne 2012 in San Francisco might be busy right now building their personal session schedule. So here’s what I have to offer:

On Wed., Oct. 3, I’ll speak about

Redefining Modularity and Reuse in Variants—
All with Object Teams

I'm Speaking at JavaOne 2012

I guess one of the goals behind moving Jigsaw out of the schedule for Java 8 was to have more time to improve our understanding of modularity, right? So why not join me when I discuss how much an object-oriented programming language can help to improve modularity.

When speaking of modularity, I don’t see this as a goal in itself, but as a means for facilitating reuse, and when speaking of reuse I’m particularly interested in reusing a module in a context that was not anticipated by the module provider (that’s what creates new value).

To give you an idea of my agenda, here’re the five main steps of this session:

  1. Warm-up: Debugging the proliferation of module concepts – will there ever be an end? Much of this follows the ideas in this post of mine.
  2. Reuse needs adaptation of existing modules for each particular usage scenario. Inheritance, OTOH, promises to support specialization. How come inheritance is not considered the primary architecture level concept for adapting reusable modules?
  3. Any attempt to apply inheritance as the sole adaptation mechanism in reuse mash-ups results in conflicts, which I characterize as the Dominance of the Instantiator. To mend this contention, inheritance needs a brother: similar in capabilities, but much more dynamic and thus more flexible.
  4. Modularity is about strictly enforced boundaries – good. However, the rules imposed by this regime can only deliver their true power if we’re also able to express limited exceptions.
  5. Given that creating variants and adapting existing modules is crucial for unanticipated reuse, how can those adaptations themselves be developed as modules?

At each step I will outline a powerful solution as provided by Object Teams.

See you in San Francisco!

Edit: The slides and the recording of this presentation are now available online.

Written by Stephan Herrmann

September 20, 2012 at 19:55

Object Teams with Null Annotations


The recent release of Juno M4 brought an interesting combination: The Object Teams Development Tooling now natively supports annotation-based null analysis for Object Teams (OT/J). How about that? 🙂

The path behind us

Annotation-based null analysis has been added to Eclipse in several stages:

Using OT/J for prototyping
As discussed in this post, OT/J excelled once more in a complex development challenge: it solved the conflict between extremely tight integration and separate development without double maintenance. That part was real fun.
Applying the prototype to numerous platforms
Next I reported that a single binary deployment of the OT/J-based prototype sufficed to upgrade any of 12 different versions of the JDT to support null annotations — looks like a cool product line.
Pushing the prototype into the JDT/Core
Next all of the JDT team (Core and UI) invested efforts to make the new feature an integral part of the JDT. Thanks to all for this great collaboration!
Merging the changes into the OTDT
Now that the new stuff was merged back into the plain-Java implementation of the JDT, it was no longer applicable to other variants, but the routine merge between JDT/Core HEAD and Object Teams automatically brought it back for us. With the OTDT 2.1 M4, annotation-based null analysis is an integral part of the OTDT.

Where we are now

Regarding the JDT, others like Andrey, Deepak and Ayushman have beaten me to blogging about the new coolness. It seems the feature even made it to become a top mention of the Eclipse SDK Juno M4. Thanks for spreading the word!

Ah, and thanks to FOSSLC you can now watch my ECE 2011 presentation on this topic.

Two problems of OOP, and their solutions

Now, OT/J with null annotations is indeed an interesting mix, because it solves two inherent problems of object-oriented programming which couldn’t be more different:

1.: NullPointerException is the most widespread and most embarrassing bug that we produce day after day, again and again. Pushing support for null annotations into the JDT has one major motivation: if you use the JDT but don’t use null annotations, you’ll no longer have an excuse. For no good reason your code will retain these miserable properties:

  • It will throw those embarrassing NPEs.
  • It doesn’t tell the reader about fundamental design decisions: which part of the code is responsible for handling which potential problems?

Why is this problem inherent to OOP? The dangerous operator that causes the exception is this:

    .

Right, the tiny little dot. And that happens to be the least dispensable operator in OOP.
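For illustration, a minimal sketch using the JDT annotations (hypothetical methods; the comments paraphrase what the analysis reports):

    import org.eclipse.jdt.annotation.Nullable;

    class Lookup {
        @Nullable String find(String key) {
            return null; // may legitimately answer "not found"
        }

        void use() {
            String s = find("key");
            s.length();              // flagged: potential null pointer access
            if (s != null)
                s.length();          // OK: the dot is guarded by a null check
        }
    }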

2.: Objectivity seems to be a central property of any approach that is based just on objects. While so many other activities in software engineering are based on the insight that complex problems with many stakeholders involved can best be addressed using perspectives and views etc., OOP forces you to abandon all that: an object is an object is an object. Think of a very simple object: a File. Some part of the application will be interested in the content, so it can decode the bytes and do something meaningful with them; another part of the application (maybe an underlying framework) will mainly be interested in the path in the filesystem and how it can be protected against concurrent writing; still other parts don’t care about either but only let you send the thing over the net. By representing the “File” as an object, that object must have all properties that are relevant to any part of the application. It must be openable, lockable and sendable and whatnot. This yields bloated objects and unnecessary, sometimes daunting dependencies. Inside the object, all those different use cases it is involved in cannot be separated!

With roles, objectivity is replaced by a disciplined form of subjectivity: each part of the application will see the object with exactly those properties it needs, mediated by a specific role. New parts can add new properties to existing objects — not in the unsafe style of dynamic languages, but strictly typed and checked. What does this mean for practical design challenges? E.g., direct support for feature-oriented designs – the direct path to painless product lines etc. A sketch of the File example follows below.
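A minimal sketch of the File example (all names hypothetical): each team contributes just the perspective it needs, and neither team sees the other’s additions.

    public class File {
        // lean base object: no networking, no locking, just the essentials
    }

    public team class Networking {
        protected class Sendable playedBy File {
            protected void send() { /* serialize and transmit */ }
        }
    }

    public team class Concurrency {
        protected class Lockable playedBy File {
            private boolean locked;
            protected void lock() { locked = true; }
        }
    }

The base File stays free of both concerns, yet each team works with “its” File as if the role’s properties had always been there.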

Just like the dot, objectivity seems to be hardcoded into OOP. While null annotations make the dot safe(r), the roles and teams of OT/J add a new dimension to OOP where perspectives can be used directly in the implementation. Maybe it does make sense to have both capabilities in one language 🙂 although one of them cleans up what should have been sorted out many decades ago, while the other opens new doors towards the future of sustainable software designs.

The road ahead

The work on null annotations goes on. What we have in M4 is usable and I can only encourage adopters to start using it right now, but we still have an ambitious goal: eventually, the null analysis shall not only find some NPEs in your program; the absence of null-related errors and warnings shall give the developer the guarantee that this piece of code will never throw an NPE at runtime.

What’s missing towards that goal:

  1. Fields: we don’t yet support null annotations for fields. This is next on our plan, but one particular issue will require experimentation and feedback: how do we handle the initialization phase of an object, where fields start out as null? More on that soon; the sketch after this list shows the crux.
  2. Libraries: we want to support null specifications for libraries that have no null annotations in their source code.
  3. JSR 308: only with JSR 308 will we be able to annotate all occurrences of types, like, e.g., the element type of a collection (think of List<@NonNull String>)
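To see why fields are tricky, here is a small sketch (hypothetical class) of the initialization-phase problem mentioned in item 1:

    import org.eclipse.jdt.annotation.NonNull;

    class Person {
        @NonNull String name; // at what point is this guarantee established?

        Person() {
            greet();            // virtual call before 'name' is assigned...
            this.name = "Jane";
        }

        void greet() {
            System.out.println(name.length()); // ...NPE despite @NonNull
        }
    }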

Please stay tuned as the feature evolves. Feedback including bug reports is very welcome!

Ah, and one more thing in the future: I finally have the opportunity to work out a cool tutorial with a fellow JDT committer: How To Train the JDT Dragon with Ayushman. Hope to see y’all in Reston!

Written by Stephan Herrmann

December 20, 2011 at 22:32

The Essence of Object Teams


When I write about Object Teams and OT/J I easily get carried away indulging in cool technical details. Recently in the Object Teams forum we were asked about the essence of OT/J which made me realize that I had neglected the high-level picture for a while. Plus: this picture looks a bit different every time I look at it, so here is today’s answer in two steps: short and extended.

Short version:

OT/J adds to OO the capability to define a program in terms of multiple perspectives.

Extended version:

In software design, e.g., we are used to describing a system from multiple perspectives or views, like: structural vs. behavioral views, architectural vs. implementation views, views focusing on distribution vs. business logic, or just: one perspective per use case.

A collage of UML diagrams

© 2009, Kishorekumar 62, licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license.

In regular OO programming this is not well supported: an object is defined by exactly one class, and no matter from which perspective you look at it, it always has exactly the same properties. The best we can do here is control the visibility of methods by means of interfaces.

A person is a person is a person!?

By contrast, in OT/J you start with a set of lean base objects and then you may attach specific roles for each perspective. Different use cases and their scenarios can be implemented by different roles. Visualization in the UI may be implemented by another independent set of roles, etc.

A base seen from multiple perspectives via roles.

Code-level comparison?

I’ve been asked to compare standard OO and OT/J by direct comparison of code examples, but I’m not sure that is a good idea, because OT/J is not intended to just provide a nicer syntax for things you can do in the same way using OO. OT/J – as any serious new programming language should, IMHO – wants you to think differently about your design. For example, we can give an educational implementation of the Observer pattern using OT/J, but once you “think Object Teams” you may no longer be interested in the Observer pattern, because you can achieve basically the same effect using a single callin binding as shown in our Stopwatch example.

So, positively speaking, OT/J wants you to think in terms of perspectives and make the perspectives you would naturally use for describing the system explicit by means of roles.

OTOH, a new language is of course only worth the effort if you have a problem with existing approaches. The problems OT/J addresses are inherently difficult to discuss in small examples, because they are:

  1. software complexity: e.g., with standard OO it becomes notoriously difficult to find a decomposition in which different concerns are not badly tangled with each other.
  2. software evolution: e.g., the initial design will be challenged with change requests that no one can foresee up-front.

(Plus all the other concerns addressed)

Both theory and experience tell me that OT/J excels in both fields. If anyone wants to challenge this claim using an example of your own, please do so and post your findings, or let’s discuss possible designs together.

One more essential concept

I should also mention the second essence of OT/J: after slicing elements of your program into roles and base objects, we also support re-grouping the pieces in a meaningful way. As roles are contained in teams, those teams enable you to raise the level of abstraction such that each user-relevant feature, e.g., can be captured in exactly one team, hiding all the details inside. As a result, for a high-level design view you no longer have to look at diagrams of hundreds of classes, but maybe just a few tens of teams.

Hope this helps seeing the big picture. Enough hand-waving for today, back to the code! 🙂

Written by Stephan Herrmann

September 20, 2011 at 15:39

Combination of Concerns


If you are interested in more than two of these …

… you’ll surely enjoy this:

Hands-on introduction to Object Teams

See you at EclipseCon!

Written by Stephan Herrmann

March 17, 2011 at 16:52

Null annotations: prototyping without the pain



So, I’ve been working on the annotations @NonNull and @Nullable so that the Eclipse Java compiler can detect your NullPointerExceptions statically, at compile time (see also bug 186342).

By now it’s clear this new feature will not be shipped as part of Eclipse 3.7, but that needn’t stop you from trying it, as I have uploaded the thing as an OT/Equinox plugin.

Behind the scenes: Object Teams


Today’s post shall focus on how I built that plugin using Object Teams, because it greatly shows three advantages of this technology:

  • easier maintenance
  • easier deployment
  • easier development


Before I go into details, let me express a warm invitation to our EclipseCon tutorial on Thursday morning. We’ll be happy to guide your first steps in using OT/J for your most modular, most flexible and most maintainable code.

Maintenance without the pain

It was suggested that I should create a CVS branch for the null annotation support. This is a natural choice, of course. I chose differently, because I’m tired of double maintenance: I don’t want to spend my time applying patches from one branch to the other and mending merge conflicts, so I avoid it wherever possible. You don’t think this kind of compiler enhancement can be developed outside the HEAD stream of the compiler without incurring double maintenance? Yes it can. With OT/J we have the tight integration that is needed for implementing the feature while keeping the sources well separated.

The code for the annotation support even lives in a different source repository, but the runtime effect is the same as if all this were already an integral part of the JDT/Core. Maybe I should say that for this particular task the integration using OT/J causes a somewhat noticeable performance penalty: the compiler does an awful lot of work, and hooking into this busy machine comes at a price. So yes, at the end of the day this should be re-integrated into the JDT/Core. But for the time being the OT/J solution serves its purpose well (and in most other situations you won’t even notice any impact on performance; plus we already have further performance improvements in the OT/J runtime in our development pipeline).

Independent deployment

Had I created a branch, the only way to get this to you early adopters would have been via a patch feature. I do have some routine in deploying patch features, but they have one big drawback: they create a tight dependency to the exact version of the feature which you are patching. That means, if you have the habit of always updating to the latest I-build of Eclipse, I would have to provide a new patch feature for each individual I-build released at Eclipse!

Not so for OT/Equinox plug-ins. In this particular case I have a lower bound: the JDT/Core must be from a build ≥ 20110226. Other than that, the same OT/J-based plug-in seamlessly integrates with any Eclipse build. You may wonder how I can be so sure: couldn’t changes in the JDT/Core break the integration? Theoretically: yes. Practically, as a JDT/Core committer I’ll be the first to know about those changes. But most importantly: from many years’ experience of using this technology I know such breakage is very seldom, and should a problem occur it can be fixed in the blink of an eye.

As a special treat the OT/J-based plug-in can even be enabled/disabled dynamically at runtime. The OT/Equinox runtime ships with the following introspection view:

Simply unchecking the second item dynamically disables all annotation-based null analysis, consistently. The sketch below shows the API behind this switch.
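In API terms, the checkbox corresponds to team (de)activation, roughly as in this sketch (the team instance name is hypothetical; activate/deactivate and ALL_THREADS belong to org.objectteams.Team):

    import org.objectteams.Team;

    class NullAnalysisSwitch {
        // toggling a team enables/disables all its callin bindings in one consistent step
        static void setEnabled(Team nullAnalysisTeam, boolean on) {
            if (on)
                nullAnalysisTeam.activate(Team.ALL_THREADS);   // role behavior on
            else
                nullAnalysisTeam.deactivate(Team.ALL_THREADS); // role behavior off
        }
    }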

Enjoyable development

The Java compiler is a complex beast. And it’s not exactly small: over 5 MB of source spread over 323 classes in 13 packages, the central package of these (ast) comprising no fewer than 109 classes. To add insult to injury: each line of this code could easily get you puzzling for a day or two. It ain’t easy.

If you are a wizard of the patches, feel free to look at the latest patch from the bug. Does that look like something you’d like to work on? Not after you’ve seen how nice & clean things can be, I suppose.

First level: overview

Instead of unfolding the package explorer until it shows all relevant classes (by which time the scrollbar will probably be too small to grasp), a quick look into the outline suffices to see everything relevant:

Here we see one top-level class, actually a team class. The class encapsulates a number of role classes containing the actual implementation.

Navigation to the details

Each of those role classes is bound to an existing class of the compiler, like:

   protected class MessageSend playedBy MessageSend { ...
(Don’t worry about the identical names: already from the syntax it is clear that the left identifier MessageSend denotes a role class in the current team, whereas the second MessageSend refers to an existing base class imported from some other package.)

Ctrl-click on the right-hand class name takes you to that base class (the packages containing those base classes are indicated in the above screenshot). This way the team serves as the single point of reference from which each affected location in the base code can be reached with a single mouse click – no matter how widely scattered those locations are.

When drilling down into details, a typical role looks like this:

The three top items are “callout” method bindings providing access to fields or methods of the base object. The bottom item is a regular method implementing the new analysis for this particular AST node, and the item above it defines a “callin” binding which causes the new method to be executed after each execution of the corresponding base method.
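Since the screenshot is not reproduced here, a rough code rendition of such a role (member names are illustrative, not the actual JDT/Core signatures):

   protected class MessageSend playedBy MessageSend {
       // callout bindings: read state of the base AST node
       Expression getReceiver()      -> get Expression receiver;
       TypeBinding getResolvedType() -> get TypeBinding resolvedType;

       // regular role method: the new analysis for this node type
       void analyseNull() { /* check the receiver against null annotations */ }

       // callin binding: run the analysis after each execution of the base method
       analyseNull <- after resolveType;
   }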

Locality of new information flows

Since all these roles define almost normal classes and objects, additional state can easily be introduced as fields in role classes. In fact some relevant information flows of the implementation make use of role fields for passing analysis results directly from one role to another, i.e., the new analysis mostly happens by interaction among the roles of this team.

Selective views, e.g., on inheritance structures

As a final example consider the inheritance hierarchy of class Statement: In the original this is a rather large tree:

Way too large, actually, to be fully unfolded in a single screenshot. But for the implementation at hand most of these classes are irrelevant. So at the role layer we’re happy to work with this reduced view:


This view is not obtained by any filtering in the IDE; that is indeed the real, full inheritance tree of the role class Statement. This is just one little example of how OT/J supports the implementation of selective views. As a result, when developing the annotation-based null analysis, the code immediately provides a focused view of everything that is relevant, where relevance is directly defined by design intention.

A tale from the real world

I hope I could give an impression of a real-world application of OT/J. I couldn’t think of a nicer structure for a feature of this complexity, built on an existing code base of this size and complexity. It’s actually fun to work with such powerful concepts.

Did I already say? Don’t miss our EclipseCon tutorial 🙂

Hands-on introduction to Object Teams

See you at EclipseCon!

Written by Stephan Herrmann

March 14, 2011 at 00:18

Object Teams rocks :)


This is a story of code evolution at its best. About re-use and starting over.

During the last week or so I modernized a part of the Object Teams Development Tooling (OTDT) that had been developed some 5 years ago: the type hierarchy for OT/J. I’ll mention the basic requirements for this engine in a minute. While most of the OTDT succeeds in reusing functionality from the JDT, the type hierarchy was implemented as a full replacement of the original. This is a pretty involved little machine, which took weeks and months to get right. It provides its logic to components like Refactoring and the Type Hierarchy View.

On the one hand this engine worked well for most uses, but over so many years we did not succeed in solving two remaining issues:

Give a faithful implementation for getSuperclass()
This is tricky because a role class in OT/J can have more than one superclass. Failing to implement this method, we could not support the “traditional” mode of the hierarchy view, which shows both the tree of subclasses of a focus type and the path of superclasses up to Object (this upwards path relies on getSuperclass).
Support region based hierarchies
Here the type hierarchy is not only computed for supertypes and subtypes of one given focus type; rather, the full inheritance structure is computed for a set of types (a “region”). This strategy is used by many JDT refactorings, and thus we could not precisely adapt some of these for OT/J.

In analyzing this situation I had to weigh these issues:

  • In its current state the implementation strategy was a show stopper for one mode of the type hierarchy view and for precise analysis in several refactorings.
  • Adding a region-based variant of our hierarchy implementation would have meant re-inventing lots of stuff, both from the JDT and from our own development.
  • All this seemed to suggest discarding our own implementation and starting over from scratch.

Start over I did, but not from scratch: I started from the wealth of a working JDT implementation.

Object Teams to the rescue: Let’s re-build Rome in ten days.

As mentioned in my previous post, the strength of Object Teams lies in building layers: each module sits in one layer, and integration between layers is given by declarative bindings:

Applying this to the issue at hand we now actually have three layers with quite different structures:

Java Model

The bottom layer is the Java model that implements the containment tree of Java elements: a project contains source folders, containing packages, containing compilation units, containing types, containing members. In this model each Java type is represented by an instance of IType.

Java Type Hierarchy

This engine from the JDT maintains the graph of inheritance information as a second way of navigating between ITypes. Interestingly, this module pretty closely simulates what Object Teams does natively; I may come back to that in a later post.

Object Teams Type Hierarchy

As an extension of Java, OT/J naturally supports normal inheritance using extends, but there is a second way an inheritance link can be established: via inheritance of the enclosing team:

team class EcoSystem {
   protected class Project { }
   protected class IDEProject extends Project { }
}
team class Eclipse extends EcoSystem {
   @Override // same simple name: implicit subclass of EcoSystem.Project
   protected class Project { }
   @Override // implicit subclass of EcoSystem.IDEProject
   protected class IDEProject extends Project { }
}

Here, Eclipse.Project is an implicit subclass of EcoSystem.Project, simply because Eclipse is a subclass of EcoSystem and both classes have the same simple name Project. I will not go into the motivation and consequences of this language design (that’ll be a separate post — which I actually promised many weeks ago).

Looking at the technical challenge we see that the implicit inheritance in OT/J adds a third layer, in which classes are connected in yet another graph.

Three Layers — Three Graphs

Looking at the IType representation of Eclipse.IDEProject we can ask three questions:

Question                          | Code                          | Answer
What is your containing element?  | type.getParent()              | Eclipse
What is your superclass?          | hierarchy.getSuperclass(type) | Eclipse.Project
What is your implicit superclass? | ??                            | EcoSystem.Project

Each question is implemented in a different layer of the system. Things get a little complicated when asking a type for all its supertypes, which requires collecting the answers from both the JDT hierarchy layer and the OT hierarchy. Yet the most tricky part was giving an implementation for getSuperclass().

An "Impossible" Requirement

There is a hidden assumption behind method getSuperclass() which is pervasive in large parts of the implementation, especially most refactorings:

When searching all methods that a type inherits from other types, looping over getSuperclass() until you reach Object will bring you to all the classes you need to consider, like so:

IType currentType = /* some init */;
while (currentType != null) {
   findMethods(currentType, /*some more arguments*/);
   // assumes a single, linear chain of superclasses ending at Object:
   currentType = hierarchy.getSuperclass(currentType);
}

There are lots and lots of places implemented using this pattern. But how do you do that if a class has multiple superclasses? I cannot change all the existing code to use recursive functions rather than this single loop!

Looking at Eclipse.IDEProject we have two direct superclasses: Eclipse.Project (normal inheritance, “extends”) and EcoSystem.IDEProject (OT/J implicit inheritance), which cannot both be answered by a single call to getSuperclass(). The programming language theory behind OT/J, however, has a simple answer: linearization. Thus, the superclasses of Eclipse.IDEProject are:

  • Eclipse.IDEProject → EcoSystem.IDEProject → Eclipse.Project → EcoSystem.Project

… in this order. And this is how this shall be rendered in the hierarchy view:

The final challenge: what should this query answer:

        getSuperclass(ecoSystemIDEProject);

According to the above linearization we should answer Eclipse.Project, but only if we are in the context of the superclass chain of Eclipse.IDEProject. Talking directly to EcoSystem.IDEProject, we should get EcoSystem.Project! In other words: the function needs to be smarter than what it can derive from its arguments.

Layer Instances for each Situation

Let’s go back to the layer thing:

At the bottom you see the Java model (as rendered by the package explorer). In the top layer you see the OT/J type hierarchy (let’s forget about the middle layer for now). Two essential concepts can be illustrated by this picture:

  • Each layer is populated with objects, and while each layer owns its objects, those objects connected with a red line between layers are almost the same: they represent the same concept.
  • The top layer can be instantiated multiple times: for each focus type you create a new OT/J hierarchy instance, populated with a fresh set of objects.

It is the second bullet that resolves the “impossible” requirement: the objects within each layer instance are wired differently, implementing different traversals. Depending on the focus type, each layer may answer the getSuperclass(type) question differently, even for the same argument.

The first bullet answers how these layers are integrated into a system: Conceptually we are speaking about the same Java model elements (IType), but we superimpose different graph structure depending on our current context.

All layers basically talk about the same objects,
but in each layer these objects are connected in the specific way that suits the task at hand.

Inside the hierarchy layer, we actually do not handle IType instances directly, but we have roles that represent one given IType each. Those roles contain all the inheritance links needed for answering the various questions about inheritance relations (direct/indirect, explicit/implicit, super/sub).

A cool thing about Object Teams is that having different sets of objects in different layers (team instances) doesn’t make the program more complex, because I can pass an object from one layer into methods of another layer, and the language will quite automagically translate it into the object that sits at the other end of that red line in the picture above. Although each layer has its own view, they “know” that they are basically talking about the same stuff (sounds like real life, doesn’t it?). A sketch of this translation follows below.
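A minimal sketch of these mechanics (hypothetical names, greatly simplified): “IType as TypeNode” lifts an incoming IType to its role in this team instance, and returning a role where an IType is expected lowers it back.

    public team class FocusHierarchy {
        public FocusHierarchy(IType focus) {
            // compute the linearized superclass chain for 'focus'
            // and wire the TypeNode roles accordingly (omitted)
        }

        protected class TypeNode playedBy IType {
            protected TypeNode nextSuper; // wired differently per team instance
        }

        public IType getSuperclass(IType as TypeNode type) {
            if (type.nextSuper == null)
                return null;           // end of this instance's chain
            return type.nextSuper;     // implicitly lowered back to IType
        }
    }

With this shape, hierarchies created for different focus types can answer getSuperclass differently for the very same IType argument, which is exactly the context sensitivity demanded above.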

Summing up

OK, I haven’t shown any code of the new hierarchy implementation (yet), but here’s a sketch of before-vs.-after:

Code Size
The new implementation of the hierarchy engine is about half the size of the previous one (because it need not repeat anything that’s already implemented in the Java hierarchy).
Integration
The previous implementation had to be individually integrated into each client module that normally uses Java hierarchies and should then use an OT hierarchy instead. After the re-implementation, the OT hierarchy is transparently integrated, such that no clients need to be adapted (accounting for even more code that could be discarded).
Linearization
Using the new implementation, getSuperclass() answers the correct, context-sensitive linearization, as shown in the screenshot above; the old implementation failed to achieve this.
Region based hierarchies
The old implementation was incompatible with building a hierarchy for a region. For the new implementation it doesn’t matter whether it’s built for a single focus type or for a region, so many clients now work better without any additional effort.

The previous implementation only scratched the surface; it literally worked around the actual issue (which is: the Java type hierarchy is not aware of OT/J implicit inheritance). The new solution solves the issue right at its core: the new team OTTypeHierarchies assists the original type hierarchy, such that its answers indeed respect OT/J’s implicit inheritance. By performing this adaptation at the issue’s core, the solution automatically radiates out to all clients. So I expect that investing a few days in re-writing the implementation will pay off in no time. In particular, improving the (already strong) refactoring support for OT/J is now much, much easier.

Lessons learned: when your understanding of a problem improves, you’ll be able to discard your old workarounds and move the solution closer to the core. This reduces code size, makes the solution more consistent, enables you to solve issues you previously weren’t able to solve, and transparently provides the solution to a wider range of client modules.
Moving your solution into the core could easily result in a design where a few bloated and tangled core modules do all the work, mocking the very idea of modularity. This can be avoided by a technology that is based on some concept of perspectives and self-contained layers, as supported by teams in OT/J.

Need I say how much fun this re-write was? 🙂

Written by Stephan Herrmann

August 18, 2010 at 22:52