Friday, December 18, 2009

The TimeLabel component

The time label component is something I slapped together to show accumulated time. It's an enhanced ActionScript3 Label with a simple API for making it tick.

Input a numeric value of time in milliseconds, and the TimeLabel will display it grouped nicely in days, hours, minutes, and seconds.

Invoke the time label’s start/stop to make it count seconds in any direction you like.


This is an image of the TimeLabel, surrounded by the controls that operate its API.

The API is as follows:

Set and retrieve the numeric value representing time in milliseconds (note: there is no numeric validation, so that is up to you):

  • function set data(value:Object):void
  • function get data():Object

Start and stop the timer:

  • function start():void
  • function stop():void

Determine if the timer goes up or down:

  • function increment():void
  • function decrement():void

download source code

Tuesday, November 17, 2009

Software Disclaimer Sample

I’ve found this text:

This SOFTWARE PRODUCT is provided by THE PROVIDER "as is" and "with all faults." THE PROVIDER makes no representations or warranties of any kind concerning the safety, suitability, lack of viruses, inaccuracies, typographical errors, or other harmful components of this SOFTWARE PRODUCT. There are inherent dangers in the use of any software, and you are solely responsible for determining whether this SOFTWARE PRODUCT is compatible with your equipment and other software installed on your equipment. You are also solely responsible for the protection of your equipment and backup of your data, and THE PROVIDER will not be liable for any damages you may suffer in connection with using, modifying, or distributing this SOFTWARE PRODUCT.

(taken from:

Just replace "THE PROVIDER" with your name or company name, and hopefully it will do.

If anyone knows of a better generic disclaimer for software, I would really appreciate a tip.

Monday, November 16, 2009

Crafting blog

I've revived my old “practical art” blog with some things I've done lately.

Lately I have been spending my time either on the White Rabbit project or in my studio, making pots and bowls.

This is the link to my crafty blog:

And here is a screen shot from the new White Rabbit version:


In this version I thought it would be nice to use the spring graph by Mark Shepherd. I'm not sure it's the best concept, but it's lots of fun.

I've also been playing around with the skinning, but I haven't mastered it yet (as one can see).

Saturday, September 12, 2009

Scala, the Java of the future

The tug-of-war of software development languages gets really confusing; it's hard to decide which language will prevail and become the new standard in application development. Why is this important? Because we are application developers, and our relevance in the business landscape is linked to the tools we are skilled in operating.

I have heard Professor Martin Odersky say that Scala is the Java of the future. Although I cannot vouch for such a promise, I can certainly say that Scala is the most compelling language I have encountered.

Scala has incorporated many ideas, new and old, from other existing languages. Scala is functional and object-oriented. It has cool features like traits (aka mixins), Actors (Erlang-style concurrency), and advanced pattern matching. Most of all, Scala is expressive and aesthetic. Clean and comprehensible code means easy and efficient maintenance, and that is important.

Scala runs on the JVM; hence Scala can call upon Java functionality and vice versa. Scala also has .NET support, but that is not in my scope at this time. According to Professor Odersky, overall performance is not hampered and the results are usually as fast as Java. Michael Galpin posted some figures about Scala performance on his blog here, and Nick Wiedenbrueck compared Scala, Java and Groovy here.

Following are some interesting links I found for getting into Scala top down:




    Monday, September 7, 2009

    The NewsWay Reports

    This is the marketing leaflet for the product I am managing for ProImage, a subsidiary of Agfa. ProImage specializes in pre-press workflows with its leading product, NewsWay. NewsWay Reports is an enterprise solution that complements the production workflow and facilitates executive decision making.

    [Leaflet images: NewsWay Reports brochure, front and back]

    • Production events analysis
    • Graphical data visualization
    • A production data warehouse

    NewsWay Reports provides quick and easy access to, and analysis of, accumulated production data. This enables you to tune and optimize production for maximum performance at minimum cost.

    This powerful browser-based report generation software provides the ability to compare planned schedules with actual job completion, track deadlines, evaluate plate consumption and track waste. It allows you to review file input and output time of each NewsWay workflow process, and log user times.

    This facilitates the rapid identification of production bottlenecks and enables you to make real-time adjustments to your production.

    NewsWay Reports integrates with any workflow. The reports generated are easily exported to PDF or Excel formats, making them accessible to other people within your organization. You can even automate the creation and distribution of status reports to executive management, ready "first thing in the morning".

    The production statistics are accessible from the NewsWay Reports viewer and via any third-party data visualization tool, providing you with the tools you need for in-depth analysis of production.

    Make the right decisions with NewsWay Reports

    NewsWay Reports is a browser based application allowing easy and quick access to accumulated production data. The data is presented in a consistent, easy to read format, enabling statistical analysis to be performed effortlessly and facilitating better decision making.

    Additionally, its advanced querying capabilities enable in-depth analysis of production data, customized to your requirements.

    High Performance Backbone

    NewsWay Reports is anchored on a specialized Reports Data Processor. The Reports Data Processor gathers production information and performance details from the machines without hampering the production flow of events. The accumulated data is reshaped to fit into a data warehouse where the production information can later be queried with simplicity and efficiency, guaranteeing high performance response to queries.

    The vast amount of information gathered in such a data warehouse enables tracking of even the minutest of trends and bottlenecks.

    Comprehensive reports

    NewsWay Reports offers three suites of reports designed to meet your budget and needs. All suites include the NewsWay Reports Data Processor and NewsWay Reports Data Warehouse.

    NewsWay Basic Reports

    Ideal for many, this solution comes complete with a basic set of reports for output statistics and analysis, workflow throughput, and reporting on historical data.

    NewsWay Enterprise Reports

    NewsWay Enterprise Reports provides an extended suite of reports that additionally include reports for enterprise transmission and waste analysis.

    NewsWay First Class Reports

    This solution is fully customizable. It provides a tailored solution to meet the needs of the most demanding print production organization.

    Credits for the leaflet are to Izzet Edige (ProImage) and Richard Hall (Media Matters).

    View in Scribd

    Thursday, August 20, 2009

    Application performance, the highways and the sideways (opinion)

    The phrase "everything should work as fast as possible" describes the wrong way to develop software. Software is a means to an end. The end is functionality of some sort, just like a car or a house. When constructing a solution, one has to take performance in the context of the solution, not as an ideology.

    Imagine how a house would look if the driveway were paved four lanes wide just for the sake of performance.

    The prioritization of performance is determined by the business requirements. A thorough analysis of the required throughput in each of the application flows reveals by itself the highways and the sideways.

    Marking the road map in such a way is not premature optimization, but the proper use of the information that the architect has from day one. The final tuning can be done at the later stages when performance issues come up. However, developers need to know how wide the road they are paving should be.

    See also “Data Buckets”.

    Tuesday, August 11, 2009

    License security in Java, don’t be the easiest target

    Java-based enterprise products that are not open source have a big issue with Intellectual Property Rights (IPR) license protection. The relative ease of byte code decompilation makes it easy to copy the product with the license protection disabled. One can measure the magnitude of the problem just by counting the number of products that claim to have the solution. Dongles, code encryption, and machine signatures are popular techniques for license security. However, there is always that weak link: that single validation method, returning a Boolean, that checks the existence of the license.

    A friend of mine once suggested that instead of striving to be perfect, it's better to invest just enough to avoid being the easiest target. It's easy to spot this strategy in nature. If you happen to be a wildebeest, you don't have to be the fastest runner to survive; just make sure there are a few others slower than you. If you are a zebra, just go stand next to the wildebeest. When I park next to a Lexus, I don't bother to lock the doors.

    The way to implement this strategy for software licensing is by understanding the domain of product hacking and license cracking for enterprise installed software. The goal is to introduce just enough complications to become "not the easiest target".

    Understanding the domain

    A business would choose the illegal option of requesting a hacked product only if it is far enough, or obscure enough, from the hands of the IPR owner, and only if the cost of the cracked software is considerably lower than the license fee.

    The basic interaction with a hacker is: "I give you money and you give me a working version of the product without the license verification part." Since any single method is prone to be broken eventually, sooner or later the hacker will deliver the "goods" as promised: a cracked version of the product.

    The weakness in the hacker's 'user story' is that although the hacker is a master of the technology he employs, he knows nothing about the domain of the product. Thus, obscuring and masking the license validation at multiple points in the business logic is an advantage for the IPR owner.

    Create an illusion

    By all means, use all the conventional protections; obfuscate the code so it won’t be a picnic after JAD (Java De-compiler). Encrypt and put signatures on the classes that handle the license, and so on. Make the hacker sweat a little, in his comfort zone, before he delivers a hacked version that is not really cracked.

    Take no immediate action

    The easiest way for a hacker to pinpoint the license validation code is by the actions your software takes when the license is invalidated. The easy points to spot are log messages explicitly stating the lack of a license, or the point where the system freezes up; that is where the hacker starts his reverse engineering. Instead, do the unexpected: randomize the decision (heads to freeze, tails to grace), postpone the enforcement action, and reuse the validation variables for other purposes. All of these make the hacking task much harder.
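To make this concrete, here is a minimal Java sketch of the idea (all class and method names are hypothetical, not taken from any real product): a gate that grants a grace period and then randomizes enforcement, instead of failing deterministically at the first invalid check.

```java
import java.util.Random;

// Hypothetical sketch: rather than "if (!licensed) shutdown()", the check
// records the verdict and lets enforcement surface later, unpredictably.
public class LicenseGate {
    private final boolean valid;
    private final Random rng = new Random();
    private int gracedCalls;

    public LicenseGate(boolean valid) {
        this.valid = valid;
    }

    // Called from many unrelated spots in the business logic;
    // returns true when work should proceed normally.
    public boolean allow() {
        if (valid) {
            return true;
        }
        // Postpone the enforcement: let the first calls through untouched.
        if (gracedCalls < 10) {
            gracedCalls++;
            return true;
        }
        // Randomize the decision: heads to freeze, tails to grace.
        return rng.nextBoolean();
    }
}
```

With a gate like this, the failure never reproduces at the same spot twice, so the stack trace the hacker captures no longer points at the validation code.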

    Dead code

    Implanting license validation code in business logic that is called upon only in very specific conditions is probably the best cloaking method. The way to reveal license validations hidden in obscured parts of the code would be to run every possible scenario. The hacker would have a learning curve adjusting to the unfamiliar domain; meanwhile, the business that bought the hack would suffer more than one production cycle going bad on account of licensing.

    No Code Reuse please

    Just as code reuse makes code more readable, the way to secure the license validation is the exact opposite: re-implement the functionality of the validation at each point where it is needed, without referencing shared code or even repeating the code verbatim.
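As a minimal Java sketch of this idea (all names and the expiry value are made up for illustration), the same expiry rule is re-implemented inline at two unrelated call sites, phrased differently each time, so there is no single shared method a hacker can patch:

```java
// Hypothetical sketch: the license expiry rule is duplicated inline
// instead of being factored into a reusable isLicensed() helper.
public class ReportService {
    static final long EXPIRY = 2_000_000_000L; // assumed expiry timestamp

    static String renderHeader(long now) {
        // Call site 1: the rule as a direct comparison.
        if (now > EXPIRY) {
            return "trial header";
        }
        return "full header";
    }

    static int pageLimit(long now) {
        // Call site 2: the same rule, re-phrased as a difference check,
        // with no reference to the code above.
        long remaining = EXPIRY - now;
        return remaining > 0 ? 500 : 5;
    }
}
```

Patching one site leaves the other intact, so the cracked product degrades in scattered, hard-to-diagnose ways.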

    In summary

    The bottom line is money: the price of acquiring a legitimate license compared to the price and risks involved in deploying a hacked version. All you have to do is break the hacker's promise of delivery. Instability and unpredictability in the hacked product, and an expensive hacking process, can bring you just far enough from being the easiest target.

    Monday, August 3, 2009

    Suppress mouse events on child components - Flex3

    A colleague of mine showed me that the way to suppress mouse events on child components is the obscure property mouseChildren = false;

    I use it when I want to catch a mouse event on a container, regardless of which component within that container was exactly under the mouse.

    There is no way under the heavens to guess this is the functionality of a property with such a name.

    Thanks Liran.

    Saturday, July 18, 2009

    Down the rabbit hole with neo4j (Part 1)

    While standing on one foot, I would say that the neo4j project is an implementation of an embedded Java graph database. Unlike traditional relational databases, a graph database has no table structure for storing its data. Instead it uses a more dynamic (unstructured) network composition. In mathematical terminology, a network of nodes is called a graph.

    I’ll put my other foot down now.

    Applications use an object data structure to describe and abstract the business domain they address. Stitching the object-oriented structure onto the table-oriented data structure of a relational database is a major part of the time consumed in server-side programming. This price has been, and still is, willingly paid because of the absolute reliability of traditional relational data structures.

    Truth be told, there are implementations that make much more sense when persisted using tables. However, I would dare to argue that when querying becomes complex and cumbersome, it just might be a sign that the persistence method isn't appropriate.

    The unstructured graph database leads the field when it comes to implementing an ad-hoc structure that is prone to runtime changes.

    I decided to give it a try after watching Emil Eifrem's demonstration on the web, and I'm sharing my experience as I played around with three aspects of use:

    • Part 1: Converting plain java data objects into persisted neo nodes
    • Part 2: Managing lookups
    • Part 3: Experimenting With Transactions

    Setting up a neo4j environment

    The neo4j jars are available on:, under which there are two packages:

    • The apoc package includes some extra utility jars, i.e. the indexing jar, the shell jar and some examples
    • The kernel package includes the bare minimum, i.e. neo and jta jars

    My example project's source code is also available here, compatible with Eclipse and IntelliJ.

    Part 1: Converting POJO into neo

    In my mind's eye I saw plain old Java data objects that would define my domain data structure. These objects would be passed to some kind of neo-facilitating mechanism that would take care of the persistence for me.

    Using annotations

    Java annotations are a wonderful tool; I use annotations to load metadata onto class elements when the metadata is not intended to be part of the primary business course of the class.

    In this example, I have a type representing a person that has a couple of properties and lists of friends and foes. I annotated the fields with the hinting I needed to ease the conversion into neo.

    Note that the conversion is handled at runtime so the annotation retention policy is set appropriately.

    Here is the annotation declaration:

    @Retention(RetentionPolicy.RUNTIME)
    public @interface Persistance {
      Type type() default Type.Property;
      Peers relationType() default Peers.NA;

      public static enum Type {
        Property, Peer
      }
    }

    The annotated POJO, representing my person data, is as follows:

    public class Person {
      @Persistance(type = Persistance.Type.Property)
      public String name;

      @Persistance(type = Persistance.Type.Property)
      public String nickname;

      @Persistance(type = Persistance.Type.Peer, relationType = Peers.Friend)
      public List<Person> friends;

      @Persistance(type = Persistance.Type.Peer, relationType = Peers.Foe)
      public List<Person> foes;
    }
    And so on…

    The conversion to the neo nodes

    The neo world speaks in two basic terms, the node and the relationship. Each can bear its own properties.

    The nodes have no typing, which I think is a shame, but relationships do require typing. I went along with the recommendation and defined my relationship types as an enum.

    The snippet for the relationship type:
    public enum Peers implements RelationshipType {
      Friend, Foe, NA
    }

    The persistence mechanism is a recursive breakdown of the POJO graph (the person, all his friends, and so on) according to the hints I get from the annotations.

    Within a neo transaction, I call the converting method, passing it the object I want to persist:

        Transaction tx = neo.beginTx();
        Node node = null;
        try {
          node = objectToNode(object, null);
          tx.success();
          return node.getId();
        } catch (Exception e) {
          log("conversion failed: " + e);
          return -1;
        } finally {
          tx.finish();
        }

    The method objectToNode(..) is the recursive part, and I load it with a stack map to prevent the recursive calls from looping forever over objects that were already processed.

        if (stack == null) {
          stack = new HashMap<Object, Node>();
        } else if (stack.containsKey(object)) {
          return stack.get(object);
        }
        Node node = neo.createNode();
        stack.put(object, node);
        Class cls = object.getClass();
        Field[] fields = cls.getDeclaredFields();
        for (Field field : fields) {
          // only persisting the fields that have my special annotation
          if (field.isAnnotationPresent(Persistance.class)) {
            processFieldData(node, object, field, stack);
          }
        }
        return node;

    The method processFieldData(..) is where the annotations are processed. Properties are loaded on the nodes and hints for relationships invoke the recursive call that handles the linked objects as independent nodes.

          if (annotation instanceof Persistance) {
            Persistance p = (Persistance) annotation;
            log("processing annotation: " + p.type() + " "
                + p.relationType());
            if (p.type().equals(Persistance.Type.Property)) {
              // TODO: verify that the property is actually a primitive
              Object value = field.get(object);
              log("setting property: " + field.getName() + " " + value);
              // here is a neo setting of a property
              node.setProperty(field.getName(), value);
            } else if (p.type().equals(Persistance.Type.Peer)) {
              Object value = field.get(object);
              log("setting peer: " + field.getName() + " " + value);
              if (value instanceof List) {
                for (Object item : (List) value) {
                  // here is another neo bit, where a new Node is
                  // created by recursively calling the converting
                  // method with the child object; after the new node
                  // is handed back, I create the relationship
                  // between the two
                  Node otherNode = objectToNode(item, stack);
                  node.createRelationshipTo(otherNode, p.relationType());
                }
              }
            }
          }

    Download the complete java source code

    Wednesday, July 1, 2009

    Factory Implementation in Flex AS3

    The closest thing I could find for implementing a simple factory in Flex is using the getDefinitionByName function.


    import flash.utils.getDefinitionByName;

    private static const basePath:String = "..path to implementation..";
    private var a:SomeClassImp;
    private var b:OtherClassImp;

    public static function getRenderer(className:String):Object {
      var classFullName:String = basePath + className;
      var objClass:Class = getDefinitionByName(classFullName) as Class;
      return new objClass();
    }

    Note: the classes that the factory is supposed to generate at run time have to be explicitly declared as vars in the factory class. This is how the compiler knows to include them in the compilation product.

    Sunday, June 14, 2009

    Bazooka Developer

    It is so told that there shall come a time for every developer when he will be faced with the challenge of coding in the sky. A situation like this happens when a client faces a critical problem and needs an immediate solution. The solution provider decides to give it a go, sends out a repair man for a fast, cowboy-style solution, and cashes in the check.

    When my turn came, I was handed the assignment along with the flight tickets and a brief rundown of the functionality I had to deliver. The rundown took my boss about an hour.

    This kind of positive thinking often reminds me of the ‘Bazooka Joe’ bubblegum fortune, telling me that by the age of 21 I would probably land on the moon.

    Keeping that same positive thinking in mind, I realized it would be a nice opportunity to put my agile development skills to the test. I preach agility all the time. I preach it to my colleagues and to my developers. I even preach it to my boss. There was no way I was about to turn 'cowboy' on all that.

    The entire time frame for this task was less than a standard agile heartbeat, but I stuck with the principles and used them to guide me.

    My first task was to deliver functionality, so I set out to inquire whatever I could about the requirements. There was the way my boss envisioned it, and there was some correspondence with the customer that preceded the request. I swept through the mail exchanges to find out how the customer wanted it to work and how it ought to fit into the surrounding landscape of the other systems. I came out with a detailed feature list reflecting the functionality requirements, an outline sketch of the interfaces, and a high-level cubistic design with a short list of would-be obstacles (i.e. outstanding issues yet to be resolved).

    Meeting the bare minimum to enable testability: I wrote a mock-up interface invoker (two, actually, that know how to talk to each other in a session-like manner). Just like in the real world, my module was going to be 'stuck' right in between the two mockups; this way I had a convenient testing envelope in which I could gradually grow my functionality.

    I felt I was ready for some hard core coding.

    Fortune: you will not get much sleep the next few nights.

    I developed in short cycles. I used my short list of suspected pickles to define the steps for development. After coding a new solution that was supposed to address a particular issue, I ran the module against the testing facilities with all the scenarios I had so far, plus a new scenario to test the solution. This got me through debugging and made sure I was not disrupting previously developed features (i.e. regression testing).

    I know, I know, I was sprinting or whatever. In any case it kept me focused on my goals and constantly in check with working functionality.

    I sprinted at the office, I sprinted at home, I sprinted at night, in the airport and on the airplane.

    It so happened that by the time I got to the site I was far from done, but I did have a prototype that was solid enough, and, such as it was, I had it installed on one of the development environments there.

    I made use of the customer. The customer in this case was a respectable American corporation. I met there with the IT team that held everything together. Right at the beginning, they brought to my attention that they had four sets of environments. This was an important point to note, since there were differences in application versions between them. The version differences meant that every time I was environmentally promoted, I had to adjust to changes in protocol and behavior.

    Fortune: an overnight success is the result of years of preparations

    At this stage, I could have my testing cycles run up against real applications and not the mockup applications I prepared myself. The people from the IT team helped me learn about the extreme scenarios and together we were able to devise a meticulous testing plan. Together we defined the scope for load testing.

    During working hours I ran tests on the various environments that were available, and spent the nights at the hotel on code fixing and optimization. This was not an easy time for me. By the end of the first week we had conducted a few cycles of testing and amendment, and we were already deep into the load testing.

    Fortune: the love you give is equal to the love you get

    I was also able to cater for some extra requests for changes. These were minor modifications that had to do with logging and configuration options, but they made the customer happy. The help I got from the customer's staff was absolutely essential in making the delivery and getting the module into production.

    A month after I flew back, my module was set into production. Of course there were some more bugs identified and some new feature requests as well. But all in all, the customer was satisfied.

    Fortune: don’t let it go up to your head