The beta test went well, highlighting only minor problems in the release.
So I am proud to announce AgileSites 1.0.0 is released.
The release comes with an installation video you can watch below or on YouTube.
After one year of work, and after successfully using the framework on a real-world customer project, I am happy to announce release 1.0.0 beta1 of AgileSites.
AgileSites is a revolutionary framework (for Fatwire and Sites) that brings many features, mostly oriented toward enabling modern Agile development.
I seriously believe that Fatwire and Sites development will never be the same.
Most important, the framework is Open Source under the commercially friendly Apache License 2.0.
But instead of talking about it, watch the video, then consult the documentation on its website: www.agilesites.org
It is hard to believe that in 2012 we still have the problem of mixing HTML and code. That problem was supposed to have been solved many years ago, but unfortunately it is still around.
In an ideal world, a web designer is also able to code. In an even better world, a web designer is also able to write JSP templates and knows the Fatwire tags.
The problem with Fatwire/WCS is that to render the mockup, you have to add Fatwire coding to it. Because WCS is still a JSP-based system, you have to add logic to extract the content model and put it in place in the HTML mockup. That would be fine if... it were done only once.
But in reality, HTML code undergoes a number of iterations. Web designers update the mockup and return it to the server-side developers, who then have to use the new HTML to update their template code, which by now is a heavily modified version with a lot of added Java logic and JSP tags.
This is a big problem, because it is not easy. The usual process is to figure out what changed in the mockup since the last version used to build the templates, then go through the code applying those modifications, hoping that nothing breaks.
When the HTML is heavily modified, starting again from scratch is not unusual. In short, the process of updating templates when the HTML mockup changes is a real pain in the ass.
What is really needed
The principle of separating presentation code (HTML) from logic is almost as old as the web itself. There are millions of solutions around for this, yet none of them has reached the core of WCS. We badly need to implement this separation for real.
Furthermore, since HTML mockups are going to change often, we should be able to leave the mockup in its original form, so that it stays easy to update. Rendering logic should be applicable directly to the HTML in its original form, without modifications.
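To make the idea concrete, here is a tiny sketch of the principle: rendering logic that targets elements of the designer's mockup by their id attribute and fills them, leaving the rest of the markup untouched. This is only an illustration of the approach; the class name and the regex-based matching are mine, not the AgileSites API.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Illustrative sketch (not the AgileSites API): rendering logic applied
 *  to an unmodified HTML mockup, addressing elements by their id. */
public class MockupFiller {

    // Replace the body of the element carrying the given id with new content,
    // leaving the designer's surrounding markup exactly as delivered.
    static String fill(String html, String id, String content) {
        Pattern p = Pattern.compile(
            "(<(\\w+)[^>]*\\bid=\"" + Pattern.quote(id) + "\"[^>]*>).*?(</\\2>)",
            Pattern.DOTALL);
        Matcher m = p.matcher(html);
        return m.replaceFirst("$1" + Matcher.quoteReplacement(content) + "$3");
    }

    public static void main(String[] args) {
        // The mockup keeps its placeholder text; the logic swaps in real content.
        String mockup = "<div id=\"title\">Lorem ipsum</div>";
        System.out.println(fill(mockup, "title", "My Real Headline"));
    }
}
```

When the designer sends a new version of the mockup, you just drop it in: the rendering logic still finds the same ids and nothing has to be merged by hand.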
One "feature" of WCS, being a CMS, is that code is deployed in the same way as content: code is treated as content and managed in the same way. We will see that this fact can create many problems.
A content editor doing his job changes some content and then approves it for publishing. WCS is smart enough to detect dependent content and require the approval of related content, so that everything is published in a single publishing session.
This is great for a web site: you only have to update a single content asset to update all the pages referring to that content. Furthermore, the publishing process is smart enough to invalidate only the parts of the cache affected by the changed content.
Developers are supposed to work in the same way: on the development server, a developer changes the template code, then approves it and finally publishes it. Code then goes from development servers to staging servers and finally to live servers.
Let's put aside for now the fact that having a single development server for multiple developers is a problem in itself (I will say more about this later), and let's look at what developers really do and why this way of developing code does not work as well as it should.
How developers REALLY develop...
There is great variation in development procedures. Even though better tools are now available, the most common is still the aging ContentServer Explorer (now Sites Explorer), editing JSP code directly in the ElementCatalog table.
Unfortunately, when you edit a JSP, the associated Template or CSElement is not aware that you changed the code with CSExplorer. So, to make sure the "code publishing" mechanism works, you have to manually edit and save the Template or CSElement corresponding to the JSP you edited, then approve it and finally publish it.
Being a manual process, way too often someone forgets either the edit/save or the approval of a changed template.
Also, the propagation of the code from staging to live requires a re-approval of the templates. Although theoretically you could just do a bulk approve, many people are scared of republishing everything. So what usually happens is that all the changed templates are manually approved and published, following manually kept release notes.
Since the person who deploys templates from development to staging is usually different from the one who developed them, a floating document with the list of changed elements, or worse a flow of random emails, is used to propagate this information.
At some point someone makes a mistake, forgets to approve a template, or distributes a list with the wrong templates... and problems that do not exist in development start to appear, randomly, in staging or in production.
When different developers are involved, or there is turnover in the editorial team, it happens way too often that nobody knows anymore what is deployed on which server. I have seen people periodically spending days comparing each template on different servers just to figure out what went wrong and where a bug originated.
But wait... there is more
Actually things can go much worse than this.
Another problem that happens very often is when developers are forced to develop on a system disconnected from the staging/delivery chain.
This may happen for many reasons, the most common being some brain-damaged security policy, but there can be other, more practical reasons, for example: "the connection from the UK to India is too slow and we had to deploy a local development server".
The current solution to this problem is CSDT, but to be honest it is not yet widely used. People are very creative in solving the problem of distributing their development work. Some use Catalog Mover, but I have seen people distributing their work as a database dump, or even manually copying and pasting code in the Fatwire Advanced Interface.
Needless to say, this aggravates the deployment hell already described in the previous paragraph.
But the worst situation, which I have also seen, is when developers develop on their development server, while other people fix issues (usually in HTML) by editing templates directly in staging, and at the same time some urgent issues are fixed by manually editing templates directly on the live server. The result, as you can imagine, is a totally unmanageable mess. And unfortunately, even if it is an extreme case, it happens.
What you really need
Java has a concept of a deployment unit: it is called the "jar" file. Fatwire/WCS is one of the few Java environments where code is not deployed in jars; it is instead delivered as separate templates deployed through publishing.
What is really needed is that all the code for a site can be distributed as a single JAR file, which can be easily deployed, tracked, compared, distributed and versioned.
All the deployment hell I described would go away if, instead of having a bunch of files, you had a jar. The jar can be built by developers, tested separately, with bugs reported and fixed against a specific build, delivered to its destination and deployed by just copying the file and eventually running some schema update procedure.
Jars have a shortcoming, of course: usually they require a restart of the application server to be recognized, and in a live environment that is usually not acceptable. Nonetheless, it is not always true that deploying a new jar requires an application server restart. There are plenty of hot-reloading Java systems; to mention just one, hot reloading of jars in JBoss. So a system where a site is deployed in jars, without restarting the application server, is possible (and indeed, I have already implemented such a system).
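The core of such a hot-reloading scheme can be sketched in a few lines: a thin, stable stub stays deployed in the application server and fetches the site code through a class loader that is rebuilt whenever the jar changes. This is only an illustration of the idea under my own naming; SiteLoader and site.jar are made up, not part of any product.

```java
import java.io.IOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

/** Minimal sketch of jar hot reloading: instead of restarting the app server,
 *  the stub reloads the site jar through a fresh class loader on each change. */
public class SiteLoader {
    private final Path jar;
    private long lastModified = -1;
    private ClassLoader loader;

    SiteLoader(Path jar) { this.jar = jar; }

    // Return a class loader over the site jar, rebuilding it when the jar changes.
    synchronized ClassLoader current() {
        try {
            long mtime = Files.exists(jar)
                ? Files.getLastModifiedTime(jar).toMillis() : 0;
            if (loader == null || mtime != lastModified) {
                // Dropping the old URLClassLoader lets the old classes be
                // collected; the new loader picks up the freshly built jar.
                URL[] urls = Files.exists(jar)
                    ? new URL[] { jar.toUri().toURL() } : new URL[0];
                loader = new URLClassLoader(urls, SiteLoader.class.getClassLoader());
                lastModified = mtime;
            }
            return loader;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        SiteLoader sl = new SiteLoader(Path.of("site.jar"));
        // With no jar present, the loader simply delegates to its parent.
        System.out.println(sl.current().loadClass("java.lang.String").getName());
    }
}
```

The stub never changes, so the application server never needs a restart; only the class loader behind it is swapped.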
I will continue to list WCS/Fatwire development problems in the next few posts before introducing my solution to those problems. Stay tuned.
I am close to releasing a new open source project that tries to fix many of the common problems in Fatwire and WCS development.
But before the release, I want to discuss those problems, since fixing them is the main motivation underlying my development effort.
Actually, many of the features of the framework I am building cannot be understood unless you know which real-world problems they try to solve.
This is not a rant: I also have a solution to those problems, and it will follow in the next few weeks.
JSP are evil
The biggest issue in Fatwire/WCS, in my view, is that you are stuck with JSP (or worse, with the obsolete, very limited and almost undocumented "Fatwire XML").
JSPs are designed to be a quick way of rendering a dynamic HTML page using Java code: something that is 95% HTML and only a little bit of code. They are in no way meant to be a complete tool for generic Java coding.
They provide the convenience of immediate feedback (since they are recompiled and reloaded on the fly), and they provide the "markup first" approach that is useful when you need to render a lot of HTML of which only a small part is dynamic.
Indeed, this JSP flexibility comes with a lot of limitations, while coding in Fatwire development tends to be pretty complex: so complex that you end up writing big JSPs so full of code that you may have difficulty finding the HTML.
Here are some of the limitations of JSP. First and foremost, because a JSP is a single method of a single class, you are generally not supposed to define methods and classes. You actually can define them, using the "<%!" syntax.
However, since that is not the way JSPs are supposed to be used, you cannot reuse a method defined this way in another JSP. This is even worse for classes: you can create classes inside a JSP, but you cannot reuse them in another JSP.
The only way of sharing code between different JSPs is to create a separate jar with all the code and deploy it in the application server. Coding such a jar is relatively awkward, because you have to build it, deploy it and restart the application server for every change. So usually it is done rarely.
For this reason, building libraries of code is not normally done in Java (as it should be). Instead, the common practice is to create a library of "elements" called from JSPs.
The problem is that a CSElement is not really meant to be a library doing complex things; it is meant as a means to generate repeatable rendering of common page fragments.
The semantics of calling CSElements and using them as a library is, frankly, disgusting. There is no way to return a value from a CSElement, so you normally use a global environment and side effects (altering the value of a variable in a shared environment).
The JSP "language" does not offer any enforcement when using a CSElement as a library call, so everything is left to convention. You need to document clearly which variables are changed in order to see what is returned. This practice is error prone, hard to read and even harder to maintain.
Also, it often happens that a CSElement "pollutes" the calling environment, so you need to use pushvar and popvar to preserve it. This makes the whole procedure even more disgusting and unreadable, and produces really bad code, where a lot of complexity exists just to move variables around, protect them and read side effects.
Last but not least, invoking CSElements is really verbose, and typing an invocation is often so long that you end up copying and pasting code. Long code doing little is also very hard to read.
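To see why this convention is so fragile, here is the pattern reduced to plain Java, with a Map standing in for the shared ICS variable scope. The names formatPriceElement, pushvar and popvar are mine, modeling the style described above, not a real API:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

/** Sketch of the "element as library" convention: the element returns nothing,
 *  it writes its result into a shared variable scope, pollutes that scope as a
 *  side effect, and callers must push/pop variables to protect themselves. */
public class SharedScope {
    static final Map<String, String> vars = new HashMap<>();
    static final Deque<String> saved = new ArrayDeque<>();

    // The "library element": no parameters, no return value, only side effects.
    static void formatPriceElement() {
        String raw = vars.get("price");            // implicit input
        vars.put("result", "$" + raw + ".00");     // implicit output
        vars.put("tmp", "scratch");                // scope pollution
    }

    // pushvar/popvar equivalents: save and restore a variable around a call.
    static void pushvar(String name) { saved.push(vars.getOrDefault(name, "")); }
    static void popvar(String name)  { vars.put(name, saved.pop()); }

    public static void main(String[] args) {
        vars.put("tmp", "mine");
        pushvar("tmp");             // protect our variable...
        vars.put("price", "42");
        formatPriceElement();       // ...because the element overwrites it
        String result = vars.get("result");
        popvar("tmp");
        System.out.println(result + " / tmp=" + vars.get("tmp"));
    }
}
```

Notice how much of the caller's code exists purely to shuttle and protect variables; nothing in the language stops the element from clobbering anything it likes.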
How to solve JSP problems
1. What is really needed is a way to code in full, plain, clean Java, not in JSP. Classes offer all the power needed to build complex programs, while JSPs are a simplified AND limited form of Java classes. Using Java immediately gives you the ability to write methods and classes, keeping them fully reusable.
2. However, you need to retain the ability to see the result of a change immediately, because restarting the application server at each change is usually not practical: it is so slow that developers will push tons of code into the JSP just to avoid it. A solution could be JRebel, but since it is expensive, buying it can be a problem for some teams (most notably Indian teams). So the solution should be cheap.
3. Coding in Java, you will also want to avoid tons of out.println("...") filled with "\" to escape quotes just to generate HTML. You probably will not keep the "HTML first" approach of JSP and will prefer the "code first" approach of Java, but you still want an easy way of generating HTML.
4. Last but not least, JSP offers some well-defined tag libraries for rendering the content model. Since the equivalent Fatwire Java API (the Asset API) is not even close to the quality of the tag libraries, you need some efficient way of invoking tags directly from Java code.
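For point 3, even a very small builder is enough to keep "code first" HTML readable, with no println and no escaped quotes in sight. The sketch below is purely illustrative; the Html class is mine, not the AgileSites API:

```java
/** A minimal sketch of "code first" HTML generation: a tiny fluent builder so
 *  plain Java code can emit markup without out.println("...") and "\" escapes. */
public class Html {
    private final StringBuilder sb = new StringBuilder();

    // Append <name class="cssClass">body</name> and return this for chaining.
    Html tag(String name, String cssClass, String body) {
        sb.append('<').append(name)
          .append(" class=\"").append(cssClass).append("\">")
          .append(body)
          .append("</").append(name).append('>');
        return this;
    }

    @Override public String toString() { return sb.toString(); }

    public static void main(String[] args) {
        String out = new Html()
            .tag("h1", "title", "Hello")
            .tag("p", "body", "No escaped quotes needed")
            .toString();
        System.out.println(out);
    }
}
```

The fluent style keeps the markup structure visible in the Java source, which is most of what the "HTML first" approach bought you in JSP.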
We are not finished yet.
Actually, we have just started; a lot more will come. So please wait, there are more problems to discuss in the next posts. And the solution will follow, I promise. So stay tuned.
In the projects I have worked on in the last two years, a pattern emerged: developers were not using the Fatwire tags directly but developing through libraries of code. So, instead of using Fatwire tags, they were constantly calling some "elements".
That is puzzling. Why do they work this way? Why do they use those libraries instead of just LEARNING TO CODE WITH THE PRODUCT? There is no real advantage in using those libraries. A lot of code is needed to pass the parameters, more code is required to read the result, and overall it is more complex than just using the Fatwire tags in the first place. What is worse, Fatwire tags are documented. Those libraries usually are NOT.
I suspect this is a common pattern because implementers follow directions from someone who provides the library. However, the library they use is very often (I would say always) much worse than using the Fatwire API directly. This is because Fatwire does not really provide features for writing libraries in templates. They should code custom tags instead, something very rarely done.
I know the Fatwire API is somewhat confusing; for example, having to extract each attribute before rendering it looks like too much effort. But this is not an excuse to avoid learning it.
For example, an assetset:getmultiplevalues followed by an ics:listget is all that is needed to render an attribute of an asset.
It is shorter than calling the "getpageattributes" element I have seen too many times.
Real code I saw yesterday does the following:
- collect a number of parameters
- call the element that extracts ALL the attributes and stores them in a hashmap
- retrieve the hashmap
- extract the values into Java variables
- sometimes bring those variables back into ics variables
- add a lot of checks for null values
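The difference between the two styles is easy to show in plain Java, with a Map standing in for the asset. The names getAllAttributes and getAttribute are hypothetical, modeling the kind of library element described above and its direct alternative; they are not a real Fatwire API:

```java
import java.util.HashMap;
import java.util.Map;

/** Sketch contrasting "extract everything into a hashmap" with asking for
 *  the one attribute you actually need. All names here are made up. */
public class AttributeAccess {
    static final Map<String, String> asset = new HashMap<>();
    static {
        asset.put("title", "News");
        asset.put("body", "...");
        asset.put("author", "jo");
    }

    // Library style: every caller pays for all attributes and must
    // null-check each one it uses afterwards.
    static Map<String, String> getAllAttributes() {
        return new HashMap<>(asset);
    }

    // Direct style: one call, one attribute.
    static String getAttribute(String name) {
        return asset.get(name);
    }

    public static void main(String[] args) {
        Map<String, String> all = getAllAttributes();
        String viaLibrary = all.get("title") == null ? "" : all.get("title");
        String direct = getAttribute("title");
        System.out.println(viaLibrary + " == " + direct);
    }
}
```

Multiply the library-style ceremony by every attribute of every template and you get exactly the bloated code described above.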
But using a library can produce much worse results than this.
I have seen, in one project, developers constantly calling a library that extracted the attributes filtered by date, EVEN FOR ATTRIBUTES THAT WERE NOT DATE SENSITIVE.
So the coders, instead of just extracting the attributes they needed, did some dirty tricks with variables to simulate a different time range and disable the filtering when it was not needed. An extreme case of total ignorance of the environment they were working in: code ten times more complex, totally unreadable, and filled with hacks and tricks.
So I feel I should give a big warning to all Fatwire customers (and I know many of them read my blog and listen to me): check that your developers know Fatwire coding. It is not that hard. Below is a list of questions developers should know the answers to (without the answers... so developers who want to be prepared for my test will have to learn).
Unfortunately, many Indian and Chinese companies have this bad habit: since they cannot find developers with the proper skills, they throw generic Java and JSP developers into the team, and instead of providing training they add a team leader who provides a library. Then they (regularly) lose the team leader, and the developers are left hacking the library without ever learning the environment.
Simple test: do you know enough Fatwire for coding?
- What is an IList?
- How do you extract fields of an IList?
- How do you loop an IList?
- How do you load a basic asset?
- How do you extract fields from a basic asset?
- How do you load a flex asset?
- How do you load attributes of flex assets?
- How do you read a single value?
- How do you read multiple values of an attribute?
- What do you get when an attribute is a blob?
- What do you get when an attribute is an asset?
- What is the impact on the cache of using a calltemplate?
- What is the impact on the cache of using searches with a search state?
I recently had to perform a large migration and I chose to use CSDT. Well, this may contradict some posts I wrote in the past, where I stated I did not like CSDT. Actually, what I dislike is the CSDT Eclipse plugin. While it has some usefulness, it is not that great, and I currently just use Eclipse on a local JumpStart using this trick for editing.
But the CSDT command line is pretty useful when used as an export/import tool, even though it could be better. Using it, I learned a few lessons and wrote a couple of scripts to make it easier to use. I stored them in my FatGoodies repository, here. You can download the scripts with git or just display them raw and save them locally (there are just 3 files).
The CSDT jars are not included, though. Being proprietary software, I cannot distribute them. You have to take the relevant jars from your Fatwire/WCS installation and copy them into the lib subdirectory where you placed the scripts. Read the README for the list of required jars.
There are 2 scripts: a launcher and an exporter. The scripts are written for the Bash shell and work out of the box on common Unixes (Linux / Mac OS X). For Windows you need to install Cygwin or another Unix-like environment with Bash, like the one that comes with Git for Windows. I actually use this last one, not Cygwin, but I do not expect problems with Cygwin.
The first script is the launcher. It actually comes in 2 flavors: csdt-unix.sh for Unix and csdt-win.sh for Windows. The script does the dirty work of building the classpath you need to launch CSDT, and it provides all the numerous parameters that make CSDT a pain to use directly.
However, the intended use of the launcher is from a wrapper script. You should copy wrapper.sh into your own file and edit it, filling in the parameters: the location of your Content Server, username, password and site. The defaults are good for a local JumpStart kit.
Your wrapper will then invoke the launcher, and working with CSDT becomes easy. The launcher is separated from the wrapper so that you can have multiple wrappers: I actually make a copy of the wrapper script for each server I want to use, for example staging.sh, delivery.sh and so on.
The wrapper script synopsis is then just: ./wrapper.sh [<selection>] [<command>]
If you run it without arguments, you will get the list of all the assets on the configured server.
The first argument is the <selection> of assets, and it defaults to @ALL_ASSETS. Its syntax is documented in the Content Server developer tools manual. Basically, AssetType:AssetId selects a single asset, AssetType selects all the assets of a given type, @ALL_ASSETS selects all the assets and @ALL_NONASSETS selects the configurations.
The second argument is the CSDT command to perform: one of listcs, listds, export, import. You can now easily do things like:
- List all the assets:
./wrapper.sh
- List all the CSElements (or Templates or whatever) from staging:
./staging.sh CSElement listcs
- List all the non-assets in "ds" (with fw_unid) on live:
./delivery.sh @ALL_NONASSETS listds
- Export only the CSElements from staging:
./staging.sh CSElement export
- Import all the non-assets in live:
./delivery.sh @ALL_NONASSETS import
A more robust export
The previous script is good enough to launch CSDT. However, when performing a full export of a site, there is a problem. CSDT is somewhat fragile: if there is an inconsistency in the content model, the export fails.
There are many possible inconsistencies: for example, an asset using a deleted attribute, a locked asset, or something shared with another site. Many inconsistencies that are not a big issue, that survive publishing and do not impact site rendering, are fatal for CSDT.
The problem is particularly evident when you try to export everything with @ALL_ASSETS: you may wait a long time for an answer, and then the export finally stops with an exception. You have to fix the issue and run the script again. When you have thousands of assets, the "export - fix - try again" loop can be unacceptably slow.
The solution is, sadly, to export the assets one by one, collect the errors, and fix all of them at the end. This is what the second script does.
The synopsis is: ./export.sh <wrapper> [<selection>]
The script requires as its first argument a wrapper script, one of those created in the first part of the post, providing the parameters to connect to a specific CSDT instance. As said in the previous section, you create one by copying wrapper.sh into another file and editing it with the appropriate parameters.
The <selection> defaults to @ALL_ASSETS, but you can also use something like "Template" or "CSElement" to select a single asset type, or even "CSElement:123456789" to select a single asset.
The export script will export each asset in the selection separately.
The output is logged in a file named out/<wrapper>/<asset-type>/<asset-id>.
The script has 2 advantages: it won't stop if there is an error, and it is incremental.
While CSDT alone bombs out at the first error and stops processing, the export script simply logs the error and continues. Moreover, you can stop the processing at any time and then resume: existing logs are used as markers, and assets already extracted will not be extracted again.
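The strategy is simple enough to capture in a few lines. The sketch below models it in Java (the script itself is Bash): a per-asset loop that records failures instead of aborting, and skips anything already marked as done so an interrupted run can resume. The asset ids and the "failing" id are invented for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

/** Sketch of the export script's strategy: export one asset at a time,
 *  log failures and continue, and use "done" markers to be incremental. */
public class RobustExport {
    // Stands in for the out/<wrapper>/... log files used as "already done" markers.
    static final Set<String> done = new TreeSet<>();
    static final List<String> errors = new ArrayList<>();

    // Pretend exporter: any id ending in 666 is an "inconsistent" asset.
    static void exportOne(String id) {
        if (id.endsWith("666")) throw new RuntimeException("inconsistent asset " + id);
    }

    static void exportAll(List<String> ids, boolean force) {
        for (String id : ids) {
            if (!force && done.contains(id)) continue;  // incremental: skip marker
            try {
                exportOne(id);
            } catch (RuntimeException e) {
                errors.add(id + ": " + e.getMessage());  // log and keep going
            }
            done.add(id);                                // marker written either way
        }
    }

    public static void main(String[] args) {
        exportAll(List.of("CSElement:1", "CSElement:666", "CSElement:3"), false);
        System.out.println("done=" + done.size() + " errors=" + errors.size());
    }
}
```

The force flag corresponds to the script's "-f" option: without it, anything with a marker is skipped; with it, the asset is exported again.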
Once you have all the logs, you can search for errors. The simple command
grep -r Exception out/staging
will return the list of assets that logged an exception, something like this:
out/staging/CSElement/1327351719222:Exception
out/staging/CSElement/1327351719637:Exception
out/staging/CSElement/1327351720069:Exception
You can then inspect the exports that failed, fix them, and export them again. Since by default already exported assets are skipped, you have to add "-f" to the script to force a re-export. So, to export again the first asset of the previous list, you would use
./export.sh staging.sh CSElement:1327351719222 -f
How to use a Windows JDK on Unix (Linux/Mac) and ViceVersa
I am a Mac user and I usually deploy my work on Linux servers, so I normally work comfortably in a Unix environment, both for development and delivery. Thanks to Java portability, and because paths on Mac and Linux are the same, I do not have any problem moving my JumpStart Kit or local install between Mac and Linux: just copying the files into the same location is enough (paths are hardcoded in a Fatwire installation, so the paths must be the same).
However, one day I found myself with a JSK built on my Mac that I had to run in a Windows environment, and with a JSK built by someone else on Windows that I had to run in my Mac (Unix) environment.
I had previously "converted" such JSKs using string replacement tricks, but it is definitely an error-prone and annoying task. Furthermore, sometimes I have to make some changes on the Mac and then send the JSK back to a Windows user.
So I started to wonder whether replacing strings in the JSK configuration files and database to run it in another environment could be avoided. After some thinking and experimentation, the answer is a full YES. With a couple of simple OS-level tricks you don't have to change anything.
How to run a Unix JSK under Windows
This is easy.
The basic finding is that Java treats slashes and backslashes in the same way.
So a path like /Developer/Fatwire/jsk (a Unix absolute path) is actually interpreted on Windows as "\Developer\Fatwire\jsk", which is an absolute path too, except that it is relative to the current drive.
So, to run a Unix JSK on Windows, all that is needed is to place it in the directory corresponding to where it would be under Unix, and then, when you start it, change the current drive to the one where the JSK is installed.
To be sure of the current drive, you are better off starting the JSK from the command line, as explained below, instead of relying on the launcher provided by the JSK itself. Here is the procedure:
- Open the Command prompt
- Change to C: if you placed the JSK in C:/Developer/Fatwire/JSK
- Go to C:/Developer/Fatwire/JSK/App_Server/apache-tomcat-6.0.30/bin
- Copy catalina.bat.org to catalina2.bat
- Start it from the command prompt using catalina2.bat run
Please note you may need to tweak catalina2.bat, adding more memory, more PermGen space and so on. For a hint of the changes needed, look in the original catalina.bat for JAVA_OPTS and CATALINA_OPTS.
How to run a Windows JSK under Unix
It is slightly more complex to run a Windows JSK on Unix without changing strings in the JSK.
The problem here is that a Windows path is usually something like "c:\Fatwire\JSK\7.6.2\". Now, Java does not care about the direction of the slashes: backslashes are translated into forward slashes automatically.
The problem is that a path starting with C: is an absolute path on Windows but… a relative path on Unix! So simply trying to start the JSK won't work unless you do something to fix this confusion.
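You can verify both facts with a couple of lines of Java, runnable on any Unix machine (the paths are just examples):

```java
import java.io.File;

/** Quick check of the two facts above: Java accepts either slash direction,
 *  but a "C:\..." path is only absolute on Windows. */
public class PathCheck {
    public static void main(String[] args) {
        File unixStyle = new File("/Fatwire/JSK/7.6.2");
        File winStyle  = new File("C:\\Fatwire\\JSK\\7.6.2");
        // On Unix the first is absolute, while the second is not: the drive
        // letter is just an ordinary file name, so "C:\..." resolves relative
        // to the current directory.
        System.out.println("/Fatwire/... absolute here: " + unixStyle.isAbsolute());
        System.out.println("C:\\Fatwire\\... absolute here: " + winStyle.isAbsolute());
    }
}
```

That relative resolution against the current directory is precisely what the symlink trick described next exploits.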
So my technique is to install the JSK in the same directory in the filesystem, but without the drive name; in the example above, in /Fatwire/JSK/7.6.2.
Then I open the terminal, go to the Tomcat bin directory (something like /Fatwire/JSK/7.6.2/App_Server/apache-tomcat-6.0.32/bin), and do the trick described before for Windows to start it (I copy catalina.sh.org to catalina2.sh, then tweak the OPTS, taking the parameters from catalina.sh).
Now, before starting Tomcat, I also do this:
ln -sf / C:
This way you create a symbolic link named "C:" (which is a legal file name on Unix) pointing to the root folder.
Now you can start Tomcat with ./catalina2.sh run
This way, the current directory is the bin directory of Tomcat, and every relative path is resolved against this directory. This directory normally won't change; at least, I am not aware of any internal directory change inside Fatwire.
So any path reference starting with C: will go through the symlink and end up at the root folder. Since you have placed the JSK in the same position it had on Windows, every path now resolves correctly.
That is all. Enjoy working in a team without having either to switch to Windows or (worse) run the JSK in an emulator (a real pain I do not recommend to anyone, unless you really have a lot of RAM).
In almost all the projects I worked on this year, I was plagued by legacy Fatwire versions that are still around. An outdated version of the software was often a major factor in the development issues I was called in to solve. And the worst part is that customers often did not feel any need to upgrade.
I have discussed the reasons to upgrade at least to 7.6 so many times that... I decided to write this blog post, both to share the wisdom with others and as a reminder for myself, to shorten future discussions on this subject.
Fatwire 7.6 is still largely compatible with 7.x and 6.x: users won't notice any significant difference, so (unless you have a heavily customised UI) upgrading to 7.6 is mostly painless (except for CAS, see below). However, you get a number of significant improvements that are pretty useful for new development. You can see here and here the release notes for 7.6 and 7.5, but in this post I am trying to explain better what they really mean.
Upgrading to 11g is probably the better choice, as I discussed in this blog post; however, it somewhat changes the editorial team experience. In this post, instead, I explain the technical reasons for upgrading at least to 7.6.
The number one reason to switch to 7.6 is CSDT. While I was critical of CSDT in the past (because I do not like the Eclipse plugin; I still prefer either Fatclipse or my homegrown alternative fatproject), there is a feature in CSDT (similar to my own FatProject code) that actually works well and is a much more complete solution than FatProject: the import/export feature.
CSDT lets you export an entire web site in XML format, including configurations and content model, and then reimport it into another instance.
Anyone working with Fatwire knows the pain of having multiple developers working on the same server, using ContentServer Explorer to edit pages, and how limited its revision tracking is as a version control system. Modern agile development prescribes a continuous integration server. Furthermore, publishing is a tool for deploying content, but it is not great for deploying code.
To give a simple example, code has relations (a piece of code may require a specific version of another piece of code to work), but publishing just doesn't capture those relations, while version control systems do. So publishing template by template is very error prone and leads to inconsistencies between environments. Furthermore, code usually depends on a specific content model, so the two must be kept together.
CSDT finally gives developers freedom from those constraints. Since you can import and export a whole site, you can export it from a shared development environment, then import it into a JumpStart and use it for development. Furthermore, you can put the CSDT export under revision control, sharing and merging it with other developers. And finally, you can deploy it to the production server.
So, definitely, CSDT (Content Server Development Tool) is the tool that makes Fatwire development acceptable for professional developers: people used to working on their own machine, accustomed to merging others' work, revision controlling the code as a whole, creating automated builds and running tests against those builds.
InCache was introduced with Fatwire 7.5. What is InCache? It is basically just Ehcache, the underlying cache used by Hibernate and many other projects. It improves the traditional Fatwire cache, offering a number of interesting features. From the release notes, InCache, as Ehcache, is:
- Scalable and high performance
- No single node needs a complete view of the cache
- Communication via dependencies
- No shared disk
Fatwire 7.5 also introduces a new API that replaces the obsolete SOAP-based web services API. This means... a lot. This REST API is pretty verbose, but it finally allows interaction with the server in XML and JSON, and not only through form POSTs. Using the REST API, your web site can be written not only as a static site, but also as an HTML5 frontend calling backend RESTful web services, a design that is becoming increasingly common. I would say it is the current state of the art, since all new sites include (and use) jQuery...
So, if you plan to evolve your web site to implement some Ajax calls using jQuery or similar, or better, if you want code that can also update some assets and not only read them, then having the REST API deployed is definitely a must.
Single Sign On (CAS)
Last but not least
I am aware of at least one important security hole still present in 7.5 that has been fixed in 7.6. I prefer not to give the details in a public blog post, but I strongly suggest checking the release notes of 7.6... or just upgrading...
If you want to
- develop in a healthy, isolated, version-controlled development environment
- have caching options for complex uncached templates
- scale your cache up to infinity
- leverage Ajax using jQuery
- fix some security bugs
- AND you are NOT SCARED of fixing DSN and PROXY configuration because of the introduction of CAS
then definitely go and upgrade.
My first code drop of ScalaWCS is on GitHub.
It is informally 0.1 (not even tagged as such, actually), but it is not really usable yet because of the lack of a Scala API, and the documentation is still very rough.
So please don't expect to go there and use it; instead, please give it a look and let me know what you think.
However, some planned key features are already there. Here are a few:
The dispatcher to Scala is in place
The basic idea is that every template will be replaced by a simple dispatcher with this body:
This body will invoke a corresponding Scala class.
For a CSElement, the class will be in app.CSElement; for a template, in app.Template. The name of the class is then Type.Name, where Type is the name of the asset type (or Typeless if there is no type).
CSElement swWrapper will invoke class app.CSElement.swWrapper.
Template Typeless swLayout will invoke class app.Template.Typeless.swLayout.
Template swBody for Page will invoke class app.Template.Page.swBody.
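The convention in the three examples above can be captured by a small helper. This is a sketch of the naming rule only, not code from ScalaWCS (the classFor helper is my own name):

```java
/** Sketch of the dispatcher naming convention: given the asset kind
 *  (CSElement or Template), an optional "for" type and the element name,
 *  compute the class name the dispatcher would invoke. */
public class DispatchName {
    static String classFor(String kind, String type, String name) {
        String base = "app." + kind + ".";
        if (kind.equals("CSElement")) return base + name;  // no type segment
        String t = (type == null || type.isEmpty()) ? "Typeless" : type;
        return base + t + "." + name;                      // app.Template.<Type>.<name>
    }

    public static void main(String[] args) {
        System.out.println(classFor("CSElement", null, "swWrapper"));
        System.out.println(classFor("Template", null, "swLayout"));
        System.out.println(classFor("Template", "Page", "swBody"));
    }
}
```

Running it reproduces the three mappings listed above, which is the whole contract between the dispatcher stub and the Scala code.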
Hot code loading is working
You have to deploy a stub in WebCenter Sites; then all the code is deployed in a jar that is dynamically reloaded when you rebuild it. Combined with the continuous packaging of SBT, you get the "code, then reload the page to see the result" effect.
Tag API is wrapped in Scala
All the tags can be called from Scala: I generated a wrapper for each of them from the TLDs, so you can call a tag as simply as you would invoke a method.
What is left to do?
A lot. Here is the plan for 0.2:
- integrate CSDT so you can invoke it from sbt
- implement the planned Scala API
- create a reference site
- create a mock for ICS so you can unit test templates
- integrate CatalogMover in sbt
- wrap URL assemblers in Scala
- wrap filters in Scala