JGuru https://jguru.fi | When you need a guru

How to get Tomcat to see HTTPS when it’s terminated elsewhere
https://jguru.fi/get-tomcat-see-https-terminated-elsewhere.html
Sat, 07 Oct 2017 09:36:10 +0000

It’s very common to terminate HTTPS (TLS) higher up in your server stack, but you still need the webapp running in Tomcat to generate its URLs with https even though Tomcat is called over plain http inside your network. This is a problem I keep seeing year after year, so this short article shows how to accomplish that and how to test that it’s working.

In this diagram https is terminated at the firewall, but it could just as well be a load balancer or an HTTP server like Nginx or Apache. For the test setup I’m actually using Nginx; for instructions on how to set up HTTPS with Nginx, check out my post on setting up Nginx with Let’s Encrypt. Once you’ve set up https with Nginx, add the following location block to the server block that handles HTTPS. This will proxy all requests to Tomcat’s HTTP port 8080.

location / {
      # Pass the original host and client address on to Tomcat
      proxy_set_header   Host             $host;
      proxy_set_header   X-Real-IP        $remote_addr;
      proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
      proxy_set_header   REMOTE_ADDR      $remote_addr;

      # Proxy everything to Tomcat's plain HTTP connector
      proxy_pass         http://localhost:8080;
}

Tomcat is actually really easy to configure so that it generates URLs with https when TLS is terminated somewhere higher in the stack. All you need to do is add proxyPort, scheme, and secure to the connector in server.xml; below is an example. If you are serving both http and https, create a separate connector for the https traffic on a different port and proxy only the https traffic to that port.

<Connector port="8080" protocol="HTTP/1.1"
      connectionTimeout="20000"
      redirectPort="8443" 
      proxyPort="443"
      scheme="https"
      secure="true" />

Now, to check that this is actually working, you need to verify that your servlet container sees those values correctly. For that purpose I’ve created a simple webapp which you can deploy and call through your stack. It shows whether each of the three checks passes and additionally shows the request URL and server name, so you can also verify that any virtual hosts you use are passed correctly to the servlet container.

Download the HTTPS Checker webapp. See the source on GitHub.
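If you’d rather see what such a check boils down to, here is a minimal servlet sketch of the kind of values the webapp inspects; the class name and URL mapping below are illustrative, not the actual checker’s code.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative only; deploy it behind the proxy and request it over https.
@WebServlet("/https-check")
public class HttpsCheckServlet extends HttpServlet {

	@Override
	protected void doGet(HttpServletRequest request, HttpServletResponse response)
		throws IOException {

		response.setContentType("text/plain");
		PrintWriter out = response.getWriter();

		// With proxyPort="443", scheme="https" and secure="true" on the
		// connector, all three of these should reflect the https side.
		out.println("scheme      = " + request.getScheme());      // expect https
		out.println("secure      = " + request.isSecure());       // expect true
		out.println("server port = " + request.getServerPort());  // expect 443
		out.println("server name = " + request.getServerName());
		out.println("request URL = " + request.getRequestURL());
	}
}

Called directly against port 8080 over plain http, the same servlet will show http, false and 8080, which makes it easy to compare the two paths.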

Top 3 reasons why Liferay projects fail
https://jguru.fi/top-3-reasons-liferay-projects-fail.html
Tue, 28 Feb 2017 17:18:16 +0000

I’ve been using Liferay for well over ten years and I’ve seen lots of different ways Liferay projects have been done. There have been successful projects and there have been failed ones. You typically don’t hear about the successful projects but rather about the failures, yet you hardly ever hear why a project failed other than, of course, that it’s Liferay’s fault. So I wanted to list a few of the top reasons, in my experience, why they fail so that you can avoid them and have a higher chance of success. These are not listed in any order of priority; they are all equally important.

1. Team does not embrace or know the Liferay way

If you are not taking full advantage of Liferay’s features, why are you even using Liferay at all? I see this a lot where people are not willing to commit to Liferay, as if you’d be able to just swap it for another product. This leads to solving problems that Liferay has already solved and for which it provides a nice framework/API. Also, in order to take full advantage of Liferay’s features you need to know about them, and that means your team needs to be trained on Liferay.

2. Use of Liferay CE

Liferay is an open source project, so why shouldn’t I use CE for my project? Yes, it is an open source project, and that means when you encounter a bug you can either fix it yourself or you can file an issue and hope and pray that someone else fixes it and that the fix makes it into the next release six months later. If it’s not a clear bug, or you can’t provide a clear way of reproducing the unexpected behaviour, your issue will most likely not get any attention, so you are left asking for help on the forums. This is all fine if you are in no hurry to get the project to completion, which would most likely mean a personal website. If you are on a tight deadline and don’t have a capable dedicated team acting as your internal support, I’d strongly recommend buying the subscription. Using CE also limits your options for staffing the project with the most capable people, as Liferay Global Services and Liferay Partners are not allowed to work on projects where CE is used.

3. Use of expert only after there are major issues

This really comes back to reason #1, but you should bring in a real Liferay expert at the very beginning, before any final decisions are made, to audit your architecture and make sure you are fully embracing the Liferay way and are on the right track to successfully completing the project. Some will still refuse to hire an expert at this point because one costs too much, but if the expert can solve your issue in, say, 5 days where it might take your own team 30 days, was that really all that expensive?

Conclusions

There are many pitfalls with Liferay but having the right people involved in the project from the very beginning will go a long way to making sure the project succeeds.

What are your tips for successful Liferay projects?

Liferay Yubikey OTP Login
https://jguru.fi/liferay-yubikey-otp-login.html
Mon, 30 Jan 2017 20:14:15 +0000

I bought a Yubikey 4 last fall but didn’t have time to play with it until now. Yubikey is an awesome and quite cheap USB authentication key. It also supports FIDO U2F in addition to one-time passwords. So far I’ve enabled it on my Facebook account as well as for macOS Sierra login. I also wanted to write some code and integrate it with Liferay. As a result I’ve implemented a simple Yubikey OTP login.

I chose to use OTP instead of FIDO U2F because it was quite simple and I wanted to use it as the primary login in place of username and password. It could also be used as a second factor to make the username and password login more secure, but I’m leaving that as a future exercise.

I’ve posted the code on GitHub and plan on publishing it to the Liferay Marketplace. Check it out and let me know what you think. Full configuration instructions are in the README.md.
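For readers curious about what OTP validation involves under the hood: the plugin uses Yubico’s client library, but conceptually it boils down to sending the one-time password to a validation service and checking the response status. The rough sketch below calls the public YubiCloud verify endpoint directly; it is a simplification and an assumption on my part (a real implementation must also verify the response signature and that the returned otp/nonce match what was sent), not the plugin’s actual code. The client id comes from Yubico’s API key signup.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.UUID;

public class YubicoOtpCheck {

	// Simplified sketch: no response signature check, no nonce/otp echo check.
	public static boolean isValidOtp(String clientId, String otp) throws Exception {
		String nonce = UUID.randomUUID().toString().replace("-", "");

		URL url = new URL(
			"https://api.yubico.com/wsapi/2.0/verify?id=" + clientId +
				"&otp=" + otp + "&nonce=" + nonce);

		HttpURLConnection connection = (HttpURLConnection)url.openConnection();

		try (BufferedReader reader = new BufferedReader(
				new InputStreamReader(connection.getInputStream()))) {

			String line;

			// The response is a list of key=value lines; status=OK means the OTP is valid.
			while ((line = reader.readLine()) != null) {
				if (line.startsWith("status=")) {
					return "OK".equals(line.substring("status=".length()).trim());
				}
			}
		}

		return false;
	}
}

The first 12 characters of the OTP are the key’s public ID, which is the value you would map to a Liferay user account.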

Creating a custom Nginx build for Ubuntu/Debian
https://jguru.fi/creating-custom-nginx-build-ubuntudebian.html
Thu, 07 Jul 2016 16:04:07 +0000

I’ve been using the rtCamp Ubuntu package for Nginx because it has the ngx_cache_purge and ngx_pagespeed modules built in. The problem with it is that it’s still stuck on Nginx 1.8, which doesn’t support HTTP/2, so I had to figure out how to do my own build based on the latest Nginx mainline version.

These instructions apply to both Debian and Ubuntu even though for my example I use Ubuntu 16.04 LTS. I’ll be adding the ngx_cache_purge, ngx_pagespeed and headers-more modules to the package.

Prepare for the build

I like to work on anything that requires compiling in /usr/local/src, so we’ll go there, and you’ll need to get the nginx package signing key to make apt happy.

cd /usr/local/src
wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key

I’m using the Nginx mainline branch, which gets more frequent updates than the stable branch but is still just as stable.

cat <<-EOF > /etc/apt/sources.list.d/nginx.list
deb http://nginx.org/packages/mainline/ubuntu/ xenial nginx
deb-src http://nginx.org/packages/mainline/ubuntu/ xenial nginx
EOF

apt-get update

Get the build dependencies and the source code for nginx.

apt-get build-dep nginx
apt-get source nginx

At the time of writing, the latest version of nginx I get from the repository is 1.11.2, so the nginx sources end up in the directory nginx-1.11.2. The Debian packaging files are under debian in the source tree, and that’s where I’m going to create a modules directory for the code of the modules I want included.

mkdir nginx-1.11.2/debian/modules
cd nginx-1.11.2/debian/modules

Get the modules

Now in the modules directory I’m going to download and extract the code for each of the modules I want included.

wget https://github.com/FRiCKLE/ngx_cache_purge/archive/2.3.tar.gz
tar -zxvf 2.3.tar.gz

That extracts the ngx_cache_purge module to the directory ngx_cache_purge-2.3; remember that, as we’ll need it later.

wget https://github.com/pagespeed/ngx_pagespeed/archive/v1.11.33.2-beta.tar.gz
tar -zxvf v1.11.33.2-beta.tar.gz
cd ngx_pagespeed-1.11.33.2-beta/
wget https://dl.google.com/dl/page-speed/psol/1.11.33.2.tar.gz
tar -zxvf 1.11.33.2.tar.gz

For Google PageSpeed you need both the nginx module and the PageSpeed Optimization Library (PSOL) it wraps, which is why there are two downloads above. Again note the module directory, ngx_pagespeed-1.11.33.2-beta.

wget https://github.com/openresty/headers-more-nginx-module/archive/v0.30.tar.gz
tar -zxvf v0.30.tar.gz

Again note the directory where headers-more is extracted, which in this case is headers-more-nginx-module-0.30.

Configure compiler arguments

The last thing to do before we can actually build is to add the modules to the build configuration. That happens by modifying the rules file under the debian directory of the nginx source. I’ll simply add the --add-module lines as the last arguments to COMMON_CONFIGURE_ARGS. Note the backslash \ at the end of each line; make sure you remember to add one to the currently last argument, which in my case is --with-ld-opt="$(LDFLAGS)". Yours should look like this, with the added lines at the end.

COMMON_CONFIGURE_ARGS := \
 --prefix=/etc/nginx \
 --sbin-path=/usr/sbin/nginx \
 --modules-path=/usr/lib/nginx/modules \
 --conf-path=/etc/nginx/nginx.conf \
 --error-log-path=/var/log/nginx/error.log \
 --http-log-path=/var/log/nginx/access.log \
 --pid-path=/var/run/nginx.pid \
 --lock-path=/var/run/nginx.lock \
 --http-client-body-temp-path=/var/cache/nginx/client_temp \
 --http-proxy-temp-path=/var/cache/nginx/proxy_temp \
 --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
 --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
 --http-scgi-temp-path=/var/cache/nginx/scgi_temp \
 --user=nginx \
 --group=nginx \
 --with-http_ssl_module \
 --with-http_realip_module \
 --with-http_addition_module \
 --with-http_sub_module \
 --with-http_dav_module \
 --with-http_flv_module \
 --with-http_mp4_module \
 --with-http_gunzip_module \
 --with-http_gzip_static_module \
 --with-http_random_index_module \
 --with-http_secure_link_module \
 --with-http_stub_status_module \
 --with-http_auth_request_module \
 --with-http_xslt_module=dynamic \
 --with-http_image_filter_module=dynamic \
 --with-http_geoip_module=dynamic \
 --with-http_perl_module=dynamic \
 --add-dynamic-module=debian/extra/njs-ef2b708510b1/nginx \
 --with-threads \
 --with-stream \
 --with-stream_ssl_module \
 --with-http_slice_module \
 --with-mail \
 --with-mail_ssl_module \
 --with-file-aio \
 --with-ipv6 \
 $(WITH_HTTP2) \
 --with-cc-opt="$(CFLAGS)" \
 --with-ld-opt="$(LDFLAGS)" \
 --add-module="$(CURDIR)/debian/modules/ngx_cache_purge-2.3" \
 --add-module="$(CURDIR)/debian/modules/ngx_pagespeed-1.11.33.2-beta" \
 --add-module="$(CURDIR)/debian/modules/headers-more-nginx-module-0.30" 

Compile and build the package

Now you are ready to build the deb package. Make sure you are in the nginx source root.

cd /usr/local/src/nginx-1.11.2
dpkg-buildpackage -uc -b
cd ..

Install customized Nginx

Now you should have all the nginx packages built and you can install them with dpkg. When you install them, remember to tell apt to hold the packages so they are not upgraded to a newer release from the repository. If there is a new release that you want to upgrade to, you need to repeat these steps.

dpkg --install nginx_1.11.2-1~xenial_amd64.deb
apt-mark hold nginx
dpkg --install nginx-module-geoip_1.11.2-1~xenial_amd64.deb
apt-mark hold nginx-module-geoip

Once you’ve installed the package you can verify that it indeed has the modules by running:

nginx -V 2>&1 | grep ngx_cache_purge -o

If you got back ngx_cache_purge then congrats, it worked. If you didn’t, make sure your --add-module arguments are correct.

Moving a project from one git repository to another while retaining its history
https://jguru.fi/moving-project-one-git-repository-another-retaining-history.html
Sat, 11 Jun 2016 15:36:35 +0000

I recently had to move a project from one git repository to another existing repository, under a different source tree, and I wanted to retain the history of each file. This is rather easy once you know how to do it, but you can easily mess things up if you don’t. So I wanted to write clear instructions for the next time I have to do this, and hopefully it helps someone else too.

I’m going to use Liferay as the example so that it’s as clear as it can be. We have two repositories, liferay-portal and liferay-plugins, which are both hosted on GitHub. Let’s assume that, as we are modularizing things, we want to move the akismet-portlet plugin from the liferay-plugins repository to the liferay-portal repository under modules/apps/akismet. The akismet-portlet currently lives under portlets in the liferay-plugins repository. We could simply copy it there, but then we would lose the whole history of that project. So we’ll use some git magic to pull the akismet-portlet into the new path in the liferay-portal repository. This example assumes you are working with the master branch, but the same steps work with any branch.

1.  Clone the liferay-plugins repository

It’s important to create a fresh clone of the repository because what we are about to do will make it unusable. There is a way to recover, though; I learned that the hard way.

git clone git@github.com:liferay/liferay-plugins.git tmp-plugins-repo
cd tmp-plugins-repo

2. Extract the akismet-portlet and its history

In this step we’ll check out the branch from which we want to move the project and then rewrite the branch so that it only contains commits to the project we want to move. A word of warning: if you skipped step one, go back to it, since this is a destructive operation.

git checkout master
git filter-branch --subdirectory-filter portlets/akismet-portlet -- --all

When git filter-branch has run you’ll notice that the files from portlets/akismet-portlet are now in the root and nothing else in the repository appears to exist.

3. Move the project to its new path in the new repository

Next we need to move the files to the path they will live at in the liferay-portal repository, which is modules/apps/akismet.

mkdir -p modules/apps/akismet/akismet-portlet
git mv -k * modules/apps/akismet/akismet-portlet
git commit -a -m "Moved akismet-portlet to modules/apps/akismet/akismet-portlet"

Now the files are in their right place and the changes have been committed to the repository.

4. Pull the akismet-portlet from liferay-plugins to liferay-portal repository

Now clone the liferay-portal repository if you haven’t already, and run the following commands inside that clone. I’m assuming it sits next to the tmp-plugins-repo.

git remote add tmp-plugins-repo ../tmp-plugins-repo
git checkout -b akismet-portlet-move
git pull tmp-plugins-repo master

Now you are ready to merge it to master or send a pull request to someone who will merge it. The only caveat is that the change has to be merged; it can’t be rebased.

I used Liferay’s GitHub repositories as the example, but at Liferay, when we move things around we always do it with a plain copy followed by a massive commit that loses all history. If you don’t care about file history then that works just fine, but don’t come crying to me when you try to track down why a certain line of code was added.

Setting up https with Let’s Encrypt on Nginx
https://jguru.fi/setting-https-lets-encrypt-nginx.html
Sun, 29 May 2016 15:40:24 +0000

Let’s Encrypt is an awesome free, automated and open way of protecting your site with https. As you may have noticed, this site is using a Let’s Encrypt certificate, and I’ve started rolling it out to all my other sites too. With a free https certificate there’s really no excuse not to go https only. In fact, if you want to take advantage of HTTP/2 you’ll need https, since no browser currently supports it unencrypted even though the spec doesn’t mandate encryption.

Even if your site doesn’t have any sensitive information, if you ever update it or log in to it from an untrusted location such as a café, your login credentials might be disclosed to someone malicious, and since, like most of us, you probably reuse the same credentials in multiple places, that could be a really bad thing. Now, I didn’t come up with all the steps I’m about to explain here; the credit goes to Bjørn Johansen, whose blog posts I summarise below. I’ll link the posts I used as the initial reference to set this up on my server at the end of this post, in case you need more details. Let’s Encrypt support for Nginx is still experimental and buggy, so you’ll need to use the manual webroot method.

Setting up Let’s Encrypt client

We’ll use git to get the client, and bc is needed later, so on Ubuntu/Debian install them with apt.

apt-get install git bc

Now with git we’ll clone the Let’s Encrypt client repo.

git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt

Preparing Nginx

To verify the domain, the Let’s Encrypt verification server will look for verification files created by the client in a subdirectory of your webroot: /.well-known/acme-challenge/

Since I have lots of sites under the same Nginx and I want them all to use https eventually, I’ve created a configuration snippet under /etc/nginx/global named letsencrypt-challenge.conf with the following content:

# Allow access to the ACME Challenge for Let’s Encrypt
location ~ /\.well-known\/acme-challenge {
    allow all;
}

This is not required if you don’t block files starting with a dot.

The server section for the site could look something like this:

server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    include global/letsencrypt-challenge.conf;
}

Once you’ve added the include for global/letsencrypt-challenge.conf, don’t forget to reload Nginx.

service nginx reload

Get the certificate from Let’s Encrypt

Now you are ready to use the Let’s Encrypt client to request a certificate for your domain.

/opt/letsencrypt/letsencrypt-auto certonly --agree-tos --webroot -w /var/www/example.com \
-d example.com

If all goes well you’ll get four files under /etc/letsencrypt/live/example.com: privkey.pem, cert.pem, chain.pem and fullchain.pem. You’ll need those to set up SSL in Nginx, but before we do that let’s make sure the certificate is renewed automatically, because it is valid for only 90 days.

Setup auto renew for certificate

As I just mentioned, the certificates from Let’s Encrypt are only valid for 90 days, and I’m sure you don’t want to have to remember to renew them manually, so we’ll set up a cron job to do it automatically. There’s already a nice script that does all the heavy lifting for us; we just need to download it and make it executable by root.

curl https://gist.githubusercontent.com/bjornjohansen/aaf0d29f225ffd1ea222/raw/e1b4bec81d32dba86e2d4e9d70a2b9f4d6cca773/le-renew.sh > /opt/le-renew.sh
chown root:root /opt/le-renew.sh
chmod 0500 /opt/le-renew.sh

Please note that this script assumes you installed the Let’s Encrypt client in /opt/letsencrypt; if you didn’t, adjust the path in the script. It’s also a good idea to try to understand what the script does and not just blindly execute as root a script you’ve downloaded from the web.

The script renews the certificate for you when the expiration date is less than 30 days away. We’ll set up cron to run the script once a week, so even if it fails for some reason there’s still plenty of time to get it right. Create a file /etc/cron.d/letsencrypt-renew with the following content:

32 5 * * 1 root /opt/le-renew.sh example.com /var/www/example.com > /dev/null 2>&1

Setup https in Nginx

In order to serve https you’ll need a new server block that listens on port 443, and you’ll need to tell nginx where the private key and certificate for this domain are found.

server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    include global/letsencrypt-challenge.conf;
}

That is the bare minimum, but we are not going to stop there, as there are six more steps we can take to make the setup more secure and to optimize its https performance.

1) Connection credential caching

Most of the https overhead is in the initial connection setup, and by caching the session parameters we significantly speed up subsequent connections. All you need are the following lines in your config:

ssl_session_cache shared:SSL:20m;
ssl_session_timeout 60m;

This creates a cache shared between all the worker processes. 1 MB of cache can store around 4,000 sessions, so the 20 MB configured above holds roughly 80,000 sessions, which should be plenty for most sites. You can make it smaller if you are concerned, but Nginx should be smart enough not to consume all memory just for the cache.

2) Disable SSL

This may seem counterintuitive, but https actually covers both SSL (Secure Sockets Layer) and TLS (Transport Layer Security). Technically SSL has been superseded by TLS, and SSL shouldn’t be used because of its many weaknesses. Disabling SSL means your site is no longer accessible with IE6, but do you really care about that?

The latest version of TLS is 1.2, but there are still modern browsers that only support 1.0, so we should support it too. Just add the following line to your nginx config:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

3) Optimize cipher suites

Encryption is at the core of https, and some ciphers are more secure while others are not secure at all anymore, so we want to tell the client the preferred order of cipher suites to use. All of the ciphers on this list use forward secrecy, but with this list you lose support for all IE versions on Windows XP; again, do you really care?

ssl_prefer_server_ciphers on;
ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

4) Generate DH parameters

DH parameters affect the Diffie-Hellman key exchange, which is where the client and server negotiate the key for the session. By default only a 1024-bit key is used, and our Let’s Encrypt key is 2048 bits, so we need to make Nginx also use 2048 bits for the DH key exchange, otherwise it’s not as secure as it could be. The only downside is that Java 6 doesn’t support anything over 1024 bits, but again, do you really care about that?

Generate the DH parameters file with 2048 bit long prime.

mkdir -p /etc/nginx/cert
openssl dhparam -out /etc/nginx/cert/dhparam.pem 2048

Add the dhparam to your config file:

ssl_dhparam /etc/nginx/cert/dhparam.pem;

5) Enable OCSP stapling

When a proper browser is presented with a certificate it will check with the issuer whether that certificate has been revoked, and that adds extra overhead. This is where the Online Certificate Status Protocol (OCSP) comes to the rescue. The web server contacts the certificate authority’s OCSP server at regular intervals to get a signed response, which it then staples onto the handshake when the connection is set up. This is much more efficient than having the browser go out and do the check itself.

To make sure the response from the CA has not been tampered with, nginx needs to check it against the CA root and intermediate certificates. The Let’s Encrypt client already provides us with the required certificates, so all we need to do is configure stapling and ssl_trusted_certificate.

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4;
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

The resolver is a must; you can use Google’s public DNS servers as I’ve done here, or you can use your own.

6) Strict transport security

HTTP Strict Transport Security (HSTS) is a way to tell the browser that this domain should only be accessed over https. Even if you set up redirection from http to https, any requests that go over http are insecure. This feature is supported in all modern browsers and it’s really simple to enable: you just add a Strict-Transport-Security header with a maximum age. For the specified amount of time the browser won’t even try to access the site via http.

add_header Strict-Transport-Security "max-age=31536000" always;

Putting it all together

That’s a lot of configuration so here is a complete example configuration:

server {
    listen 443 ssl;
    server_name example.com;
    root /var/www/example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    include global/letsencrypt-challenge.conf;

    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 60m;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

    ssl_prefer_server_ciphers on;
    ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;

    ssl_dhparam /etc/nginx/cert/dhparam.pem;

    ssl_stapling on;
    ssl_stapling_verify on;
    resolver 8.8.8.8 8.8.4.4;
    ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

    add_header Strict-Transport-Security "max-age=31536000" always;
}

Optional steps

What you most likely want to do is redirect from http to https. That is done by replacing your old server block with the following:

server {
    listen 80;
    server_name example.com;
    root /var/www/example.com;

    return 301 https://$host$request_uri;
}

Since you now have https set up, you might want to enable HTTP/2 if you are running a new enough Nginx. That is very simple: you just add the word http2 after ssl in the listen directive, like this:

listen 443 ssl http2;

If you are running an older nginx you can still enable SPDY, which has been superseded by HTTP/2 but may still be useful until you can enable HTTP/2. SPDY is enabled similarly to HTTP/2.

listen 443 ssl spdy;

Test your configuration

So how do you know you configured everything correctly? The site might be working in your browser, but that still doesn’t guarantee everything is correct. Qualys SSL Labs provides a nice scanner for testing your setup. If you configured everything correctly you should get an A+ rating, as shown below for this site.

[Screenshot: Qualys SSL Labs test result showing an A+ rating for this site]

References:
[1] Let’s Encrypt for Nginx
[2] Optimizing HTTPS on Nginx

Monitoring Apache HTTPd with New Relic
https://jguru.fi/monitoring-apache-httpd-with-new-relic.html
Fri, 28 Aug 2015 07:18:47 +0000

When figuring out what’s wrong with a site’s performance it’s important to get facts about every aspect and component involved with that site. Apache is quite often used in front of Java applications, and it’s the app server for PHP applications. A bad Apache configuration can make a site seem sluggish even when there are plenty of other resources available, so it’s important to see what’s going on there. The first screen of the Apache HTTPd plugin gives a nice overview of all your monitored Apache instances.

New Relic Plugins Apache Listing

Drilling down to a single server overview shows request velocity, CPU load, busy/idle workers and even bytes sent over time.

New Relic Plugins Apache Overview

Going to the throughput section shows throughput details over time.

New Relic Plugins Apache Throughtput

The workers section shows you what is happening with the workers. If you have a lot of busy workers you can see what state they are in, which might provide some insight into what is going on.

New Relic Plugins Apache Workers

Installing Apache HTTPd agent for New Relic

1) I’m using the MeetMe New Relic agent to monitor Apache HTTPd. It’s written in Python, so we’ll need to install pip. The following uses the Ubuntu python-pip package; you can find alternative install methods in the pip docs.

apt-get install python-pip libyaml-dev python-dev

2) Next use pip to install newrelic-plugin-agent. When I ran it I got some errors but it still worked.

pip install newrelic-plugin-agent

3) Next we’ll create the configuration file for the agent. You can start by copying /opt/newrelic-plugin-agent/newrelic-plugin-agent.cfg or just use what I have posted below. The first thing you need to do is set license_key; you can find your license key on your account settings page on rpm.newrelic.com. The second is to add the apache_httpd configuration. You can add multiple httpds to monitor.

cat - <<EOF>> /etc/newrelic/newrelic-plugin-agent.cfg
%YAML 1.2
---
Application:
 license_key: YOUR_LICENSE_KEY
 wake_interval: 60
 #newrelic_api_timeout: 10
 #proxy: http://localhost:8080

 apache_httpd:
  - name: localhost
    scheme: http
    host: localhost
    verify_ssl_cert: true
    port: 80
    path: /server-status

Daemon:
 user: newrelic
 pidfile: /var/run/newrelic/newrelic-plugin-agent.pid

Logging:
 formatters:
   verbose:
     format: '%(levelname) -10s %(asctime)s %(process)-6d %(processName) -15s %(threadName)-10s %(name) -45s %(funcName) -25s L%(lineno)-6d: %(message)s'
 handlers:
   file:
     class : logging.handlers.RotatingFileHandler
     formatter: verbose
     filename: /var/log/newrelic/newrelic-plugin-agent.log
     maxBytes: 10485760
     backupCount: 3
 loggers:
   newrelic_plugin_agent:
     level: INFO
     propagate: True
     handlers: [console, file]
   requests:
     level: ERROR
     propagate: True
     handlers: [console, file]
EOF

4) Make sure you have enabled mod_status in your Apache and that you’ve allowed access to /server-status from the host your agent is running on, if it’s not the same host as your Apache.

5) Then we need to add an init script for newrelic-plugin-agent. There’s one under /opt/newrelic-plugin-agent, but for me it was an incomplete file, so I just downloaded the one from GitHub.

wget https://raw.githubusercontent.com/MeetMe/newrelic-plugin-agent/master/etc/init.d/newrelic-plugin-agent.deb
mv newrelic-plugin-agent.deb /etc/init.d/newrelic-plugin-agent
chmod 755 /etc/init.d/newrelic-plugin-agent
update-rc.d newrelic-plugin-agent defaults

Now you can start the newrelic-plugin-agent with

service newrelic-plugin-agent start

Now, in a few minutes you should see your Apache HTTPd server(s) listed under Plugins HTTPd on rpm.newrelic.com.

Monitoring Nginx with New Relic
https://jguru.fi/monitoring-nginx-new-relic.html
Wed, 29 Apr 2015 03:20:41 +0000

Apache HTTPd has always been my go-to httpd, reverse proxy and load balancer, but lately I’ve grown more interested in Nginx. It’s very high performance and lightweight, not to mention easy to configure. With my current single Nginx instance I of course wanted to see how I could hook it into my monitoring. Turns out there’s a New Relic agent directly from Nginx.

New Relic Plugins Nginx Listing

From the overview you can see the number of active and idle connections as well as the request rate.

New Relic Plugins Nginx Overview

The connections page shows even more connection details. With very few connections and requests my graphs are currently slightly boring. In addition to connection details you can find more details about requests, upstreams, servers and cache.

New Relic Plugins Nginx Connections
Installing New Relic Monitoring Agent for Nginx

1) First you need to add the Ubuntu package repository for Nginx. If you already did this when you installed Nginx, you can skip to the next step. If you are not using Ubuntu 14.04 like I am, you can find the packages for other Linux distributions on the Nginx website.

wget http://nginx.org/keys/nginx_signing.key
apt-key add nginx_signing.key

cat - <<-EOF >> /etc/apt/sources.list.d/nginx.list
deb http://nginx.org/packages/ubuntu/ trusty nginx
deb-src http://nginx.org/packages/ubuntu/ trusty nginx
EOF

apt-get update

2) Next you need to install the Nginx New Relic Agent

apt-get install nginx-nr-agent

3) Next you’ll need to edit the agent configuration file in /etc/nginx-nr-agent/nginx-nr-agent.ini. You need to add your license key which you can find from your account settings page on rpm.newrelic.com.

newrelic_license_key=YOUR_LICENSE_KEY

Additionally you need to add a new source which points to your Nginx status url.

[source1]
name=localhost
url=http://localhost/nginx_stub_status

4) You’ll need to add a server block to Nginx for the status page. Since I had a very simple configuration I just added the following to /etc/nginx/sites-enabled/default

server {
   listen 127.0.0.1:80;
   server_name localhost;

   location = /nginx_stub_status {
     stub_status on;
     allow 127.0.0.1;
     deny all;
   }
}

5) Last thing you need to do is reload Nginx and start the Nginx New Relic Agent.

service nginx reload
service nginx-nr-agent start

Now, in a few minutes you should start seeing your Nginx server listed under Plugins Nginx on rpm.newrelic.com.

Monitoring MariaDB / MySQL with New Relic
https://jguru.fi/monitoring-mariadb-mysql-new-relic.html
Mon, 27 Apr 2015 03:32:44 +0000

In order to size your system correctly you need metrics, and you need them from your database too. Knowing what’s happening in your database is important not only for capacity planning; it’s also invaluable information when something goes wrong. If you have a database-related performance problem in your code, seeing how a fix affects the database helps you confirm you actually solved the problem. Also, by looking at database statistics you might be able to spot an issue before it becomes a serious problem.

New Relic has a wonderful plugin framework; there are a ton of ready-made plugins, plus an SDK and API for things it doesn’t already support. The MySQL plugin is one of those ready-made plugins and it provides all the key information you’ll need. The MySQL plugin page quickly shows what’s going on across all monitored databases.
New Relic Plugins MySQL

When you drill down to an individual database server, the overview shows the SQL volume and how it’s split between reads and writes. More key metrics are displayed under Key Utilizations. You’ll also find database connections and network traffic on this page.

New Relic Plugins MySQL Overview

Going further down to Query Analysis you’ll see more detail about the queries.

New Relic Plugins MySQL Query Analysis

If you are using InnoDB there’s a separate page to show key metrics from InnoDB.

New Relic Plugins MySQL InnoDB Metrics

Installing MySQL / MariaDB Monitoring

1) The MySQL plugin can easily be installed with the New Relic platform installer, so the first thing to do is install the installer itself. You’ll need your New Relic license key, which you can find under account settings on rpm.newrelic.com. Once you have that, you can install it with the following one-liner for 64-bit Debian and Ubuntu.

LICENSE_KEY=YOUR_LICENSE_KEY bash -c "$(curl -sSL https://download.newrelic.com/npi/release/install-npi-linux-debian-x64.sh)"

2) Next go to the newly created newrelic-npi directory and run the install. You’ll want to answer yes to all the questions, and when prompted to configure the plugin, grab the configuration from the next step.

./npi install nrmysql

3) If you skipped configuration you can configure the plugin afterwards too. The configuration file is under newrelic-npi at plugins/com.newrelic.plugins.mysql.instance/newrelic_mysql_plugin-2.0.0/config/plugin.json. Below is a sample configuration for MariaDB (it works for MySQL too) running on localhost; we’ll be creating a separate user newrelic with the password somepassword which the plugin will use to gather data. You can connect to multiple databases with the same agent. I usually install this agent on the same server my Nagios is running on.

{
 "agents": [
   {
     "name" : "MariaDB on localhost",
     "host" : "localhost",
     "metrics" : "status,newrelic,buffer_pool_stats,innodb_status,innodb_mutex",
     "user" : "newrelic",
     "passwd" : "somepassword"
   }
 ]
}

4) Now we need to create a user in the database and grant some rights to it.

cat - <<EOF | mysql -u root -p
CREATE USER newrelic@'%' IDENTIFIED BY 'somepassword';
GRANT PROCESS,REPLICATION CLIENT ON *.* TO newrelic@'%';
EOF

5) The last thing is to start the service, but before we do, make sure you have Java installed, as this agent is written in Java. If you don’t have Java installed, check my unattended Java install script. Then you can start the service that should have been created during the npi install if you answered all the questions correctly.

service newrelic_plugin_com.newrelic.plugins.mysql.instance start

It may take a few minutes before you see your server under Plugins MySQL on rpm.newrelic.com. If it doesn’t show up, check the logs under plugins/com.newrelic.plugins.mysql.instance/newrelic_mysql_plugin-2.0.0/logs/ for hints and make sure the agent actually started.

Monitoring Ubuntu / Debian Server with New Relic
https://jguru.fi/monitoring-ubuntu-server-new-relic.html
Fri, 24 Apr 2015 04:00:07 +0000

With New Relic Server Monitoring you see all the important information about your system at a glance. It is an essential tool for troubleshooting performance issues and for confirming that your system is properly sized. Sometimes poor application performance has nothing to do with the application but rather with the system it’s running on. If the system is not sized correctly you might be running out of memory or CPU, or the bottleneck could be disk I/O. Without proper monitoring it is very hard to pinpoint the cause.

The servers listing gives a nice overview of all servers, and you can easily see if there are any issues.

New Relic Servers

When looking at a specific server you’ll see a history of its CPU and memory usage as well as load average and network I/O. If you have any APM-enabled applications installed you’ll see an overview of their response times, throughput and error rate. You’ll also see some of the top processes running on the server.

New Relic Servers Overview
New Relic Server Overview

When you drill down to the processes listing you’ll quickly see the top memory and CPU consumers. You can also look at the history of individual processes.

New Relic Server Processes

Installing New Relic Server Monitoring on a Ubuntu / Debian Server

1) Add an apt source for New Relic.

cat - <<-EOF >> /etc/apt/sources.list.d/newrelic.list 
# newrelic repository list 
deb http://apt.newrelic.com/debian/ newrelic non-free
EOF

2) You’ll need to get the key for the New Relic repository and then update the apt sources. After that you can install newrelic-sysmond.

apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xB31B29E5548C16BF
apt-get update
apt-get install newrelic-sysmond

3) Next you’ll need to give it your license key so that it reports data to your account. You can find your license key on your account settings page on rpm.newrelic.com. You can either edit the configuration file or set the license as shown below:

/usr/sbin/nrsysmond-config --set license_key=YOUR_LICENSE_KEY

4) Finally once everything is configured you can start the system monitor daemon.

service newrelic-sysmond start

Now, in a few minutes you should start seeing your server listed under Servers on rpm.newrelic.com.

Monitoring with New Relic
https://jguru.fi/monitoring-new-relic.html
Thu, 23 Apr 2015 19:14:45 +0000

New Relic is a wonderful software analytics suite that is 100% SaaS. I love it because it’s so easy to set up compared to Nagios, MRTG and other on-premises software. Their Lite edition is free with 24-hour data retention, and for 30 days you get to see the power of the Pro version. I still use Nagios for my main monitoring and create some key graphs with MRTG, but the data junkie in me loves all the data New Relic gathers and shows in nice graphs.

New Relic has seven parts, or products as they call them: APM, Insights, Mobile, Browser, Synthetics, Servers and Plugins. I have myself used only APM, Browser, Servers and Plugins, which are included in the free Lite edition.

APM

APM is the application monitoring part. It focuses on providing information about the application itself. The Lite edition shows you response times, throughput and web transaction information; it’s basically a low-impact profiler. With the Pro subscription you get a much deeper analysis of time spent executing SQL, JVM statistics and so on.

New Relic APM Java Overview

Browser

Browser provides insight into client-side performance. Even though your application might respond quickly, the user’s perceived performance can be poor because of network performance or even how the page renders in the browser.

Servers

Servers, as the name suggests, provides performance information about the actual servers your applications are running on.

New Relic Servers Overview

Plugins

There’s a ton of plugins providing monitoring capabilities for systems not otherwise supported by New Relic, and with its SDKs and API you can build your own plugins. Some of the plugins I have used are for MySQL/MariaDB, Nginx and Apache.

New Relic Plugins MySQL Overview

Mobile

Mobile is APM for mobile applications.

Synthetics

Synthetics allows you to test your application from around the world. It can check business-critical user flows and interactions to make sure your site is available and functioning everywhere.

Insights

Insights is a paid feature that combines business metrics with performance data. It can combine data from APM, Browser, Mobile and Synthetics for deeper analysis, segmentation and filtering.

I recently installed a bunch of new servers and had to refresh my memory on how I installed and configured each of the agents, so I decided to write a series of articles on each of them. Here’s a list of topics I’m going to publish, and as I publish them I’ll link each topic to its article. These topics will cover APM with Java and PHP, Servers, and Plugins for MySQL, Nginx and Apache.

Unattended Java install on Ubuntu 14.04
https://jguru.fi/unattended-java-install-ubuntu-14-04.html
Wed, 22 Apr 2015 18:27:41 +0000

I like to automate all the tasks I do often, and one of the things many of my virtual servers need is the Java JDK. Unfortunately the Oracle JDK is not available as a Debian package, but there is a way to make one. This is where the WebUpd8 Team PPA comes in, as it provides installer packages for Java 6, Java 7 and Java 8.

Below is the script I use to install it unattended. You can also download it from a GitHub gist. If you want Java 6, just use oracle-java6-installer, and for Java 8 oracle-java8-installer. This also works on other Ubuntu versions; just substitute trusty with the codename of your Ubuntu release, e.g. precise for Ubuntu 12.04. Hope you find this useful.

cat - <<-EOF >> /etc/apt/sources.list.d/webupd8team-java.list
# webupd8team repository list 
deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
# deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main
EOF

apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xEEA14886

echo debconf shared/accepted-oracle-license-v1-1 select true | /usr/bin/debconf-set-selections
echo debconf shared/accepted-oracle-license-v1-1 seen true | /usr/bin/debconf-set-selections

apt-get update
apt-get install -y oracle-java7-installer
New Site New Focus
https://jguru.fi/new-site-new-focus.html
Wed, 28 Jan 2015 08:53:55 +0000

It’s been a while since I last posted anything, and the site had become quite stale and outdated. So I thought I’d update it to a more modern, responsive site that looks good on both desktop and mobile. As I wanted to focus more on actual writing and less on building a site, I moved back to WordPress, which is still the best blogging platform. My only concern now is security, as PHP apps are much more prone to being hacked than Java ones.

With this new focus I’m also going to write about much broader topics than just Liferay. As Liferay is moving its core towards OSGi I’ve studied it a lot and grown to love it, although it doesn’t come without its own challenges, especially for those moving to it from Java EE. As some of you may know, I’ve been running and administering my own Linux servers for more than 15 years, so some of the new topics will be about virtualization, containers, monitoring and the like. I still read a lot of books, so when I read one that deserves a mention I’ll write a book review. I hope you enjoy what’s to come.

As I built this new site I only migrated posts that had been read in the past year, so if you run into something that’s no longer available, do email me; I still have them saved. Also, all the old URLs should be automatically redirected to the new ones.

Monitoring and Graphing Liferay with MRTG
https://jguru.fi/monitoring-and-graphing-liferay-with-mrtg.html
Thu, 17 Oct 2013 11:10:01 +0000

MRTG (The Multi Router Traffic Grapher) is usually used to monitor SNMP-enabled network devices and draw graphs of how much traffic has passed through each interface. It can also be used to graph any two values (in/out), and I use it for graphing CPU usage, load average, iowait, used memory, disk space and temperature sensor values that I can read through SNMP. Liferay, however, doesn’t support SNMP, so I developed a Perl script that can read JMX MBean values using JMX4Perl and Jolokia. I’m going to assume you already have JMX4Perl and Jolokia set up the way I describe in my earlier post: Monitoring Liferay with Nagios, Jolokia and JMX4Perl. You should also note that MRTG won’t send you any alerts, so it’s a good idea to set up Nagios to do that.

Now you might wonder why you would need MRTG if you already have Nagios. Nagios operates on the present value; there is an add-on, nagios grapher, that can create graphs like MRTG does, but I like MRTG more because you can see all the graphs on one page. Being able to see a full overview of the system is very important when trying to identify performance problems. This is also why you want to get more information out of the application, Liferay in this case. My script helps in reading connection pool and thread pool utilization as well as heap usage. Those are essential when troubleshooting.

First you’ll need to install and set up MRTG. I’m not going to go into details on that because it depends on your system and the internet is full of guides for it. Once you have it done, download my mrtg-jmx4perl.pl script, which is available in my GitHub repository. For the rest of this post I’m going to assume it’s located at /usr/local/bin/mrtg-jmx4perl.pl, but it’s up to you where you put it; just adjust the script path accordingly.

Monitoring c3p0 connection pool

Getting the values for c3p0 is a little bit tricky because it generates a unique MBean name based on the identity token it creates for the connection pool every time the server is started. Because of this, my script assumes you only have one c3p0 connection pool; if you have multiple pools you’ll need to add additional logic to mrtg-jmx4perl.pl to find the correct MBean. Note that this is the case when you configure Liferay to use a connection pool from portal-ext.properties instead of using a JNDI resource. We can read the MBean for c3p0 by using the MBean name "com.mchange.v2.c3p0:type=*,*", and the attributes we are most interested in are numConnectionsAllUsers and numBusyConnectionsAllUsers. Below is a sample MRTG configuration snippet.

Target[dbpool]: `/usr/local/bin/mrtg-jmx4perl.pl --server=servername --mbean="com.mchange.v2.c3p0:type=*,*" --attribute="numConnectionsAllUsers numBusyConnectionsAllUsers"`
MaxBytes[dbpool]: 20
Title[dbpool]: DB Pool
PageTop[dbpool]: <h1>DB Pool</h1>
WithPeak[dbpool]: dwmy
Unscaled[dbpool]: dwmy
Options[dbpool]: growright,unknaszero,nopercent,gauge
YLegend[dbpool]: Connections
ShortLegend[dbpool]: 
LegendI[dbpool]: Connections
LegendO[dbpool]: Busy Connections
Legend1[dbpool]: Connections
Legend2[dbpool]: Busy Connections
Legend3[dbpool]: Peak Connections
Legend4[dbpool]: Peak Busy Connections

Here’s a daily graph from one of my Liferay portal servers.

mrtg - db pool connections

Monitoring Tomcat AJP Thread Pool

This one is pretty easy because the MBean name is static, but it does vary depending on the Tomcat version and connector you are using. In Tomcat 7 with the native library, the name for the AJP thread pool is Catalina:type=ThreadPool,name="ajp-apr-8009". Without the native library it would be ajp-bio-8009. In Tomcat 6 my AJP pool MBean name is Catalina:type=ThreadPool,name=jk-8009; notice the lack of double quotes in that name. You can easily check the name using jconsole. So for this one the config looks like:

Target[ajp-threadpool]: `/usr/local/bin/mrtg-jmx4perl.pl --server=servername --mbean="Catalina:type=ThreadPool,name=\"ajp-apr-8009\"" --attribute="currentThreadCount currentThreadsBusy"`
MaxBytes[ajp-threadpool]: 50
Title[ajp-threadpool]: AJP Thread Pool
PageTop[ajp-threadpool]: <h1>AJP Thread Pool</h1>
WithPeak[ajp-threadpool]: dwmy
#Unscaled[ajp-threadpool]: dwmy
Options[ajp-threadpool]:  growright,unknaszero,nopercent,gauge
YLegend[ajp-threadpool]: Threads
ShortLegend[ajp-threadpool]: 
LegendI[ajp-threadpool]: Threads
LegendO[ajp-threadpool]: Busy Threads
Legend1[ajp-threadpool]: Threads
Legend2[ajp-threadpool]: Busy Threads
Legend3[ajp-threadpool]: Peak Threads
Legend4[ajp-threadpool]: Peak Busy Threads

Here’s a daily graph of a thread pool.

mrtg - thread pool

Monitoring Heap Usage

The last thing we are going to monitor is Java heap usage. It can be read from java.lang:type=Memory using the attribute HeapMemoryUsage and the path used. This time we are reading only one value.

Target[heap]: `/usr/local/bin/mrtg-jmx4perl.pl --server=servername --mbean="java.lang:type=Memory" --attribute="HeapMemoryUsage" --path="used"`
MaxBytes[heap]: 1296302080
Title[heap]: Heap
PageTop[heap]: <h1>Heap</h1>
WithPeak[heap]: dwmy
Unscaled[heap]: dwmy
Options[heap]:  growright,unknaszero,nopercent,gauge,noo
YLegend[heap]: bytes
ShortLegend[heap]: 
kilo[heap]: 1024
LegendI[heap]: Used
Legend1[heap]: Used
Legend3[heap]: Peak Used

Here’s a daily graph of heap memory usage.

mrtg - heap usage

You can download the full sample-mrtg.cfg from github.

That’s how easy it is to monitor and graph Liferay, or pretty much any Java webapp, using MRTG. You could just as easily use this to monitor Ehcache utilization or anything else that’s accessible via JMX.
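For reference, the MBeans and attributes the Perl script reads through Jolokia are the same ones you can read with the standard JMX API. Here is a minimal sketch (the class name is mine; run it inside the JVM, or adapt it with a remote JMXConnector) showing where the heap and c3p0 numbers above come from:

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

public class JmxValues {

	public static void main(String[] args) throws Exception {
		MBeanServer server = ManagementFactory.getPlatformMBeanServer();

		// Heap usage: attribute HeapMemoryUsage, path "used"
		CompositeData heap = (CompositeData)server.getAttribute(
			new ObjectName("java.lang:type=Memory"), "HeapMemoryUsage");

		System.out.println("heap used = " + heap.get("used"));

		// c3p0 pools have a generated identity token in their MBean name, so
		// we look them up with a wildcard pattern just like the MRTG target does.
		Set<ObjectName> pools = server.queryNames(
			new ObjectName("com.mchange.v2.c3p0:type=*,*"), null);

		for (ObjectName pool : pools) {
			System.out.println(pool + " busy connections = " +
				server.getAttribute(pool, "numBusyConnectionsAllUsers"));
		}
	}
}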

Liferay Maven Support in Liferay 6.1 GA3
https://jguru.fi/liferay-maven-support-liferay-6-1-ga3.html
Tue, 18 Jun 2013 22:25:20 +0000

We’ve finally released both CE and EE versions of Liferay 6.1 GA3, and along with those releases we’ve also released the corresponding versions of Liferay Maven Support and the portal artifacts. The version numbers are 6.1.2 for CE GA3 and 6.1.30 for EE GA3. With this release there is one significant improvement in the Liferay Maven Plugin: it is no longer directly dependent on a Liferay Portal version. We could have released just one version and it would have worked with either portal version; in fact both work with any portal version starting from 6.1.0. In the future we will probably move to a single release of Liferay Maven Support, which will eventually have its own release cycle, completely independent of the portal’s release cycle.

All the archetypes now have a separate property for the Liferay Maven Plugin version called liferay.maven.plugin.version. The plugin now also requires you to tell it which portal version you are developing against, and you do that by providing liferayVersion in the configuration section. Here’s an example from liferay-theme-archetype:


<plugin>
  <groupId>com.liferay.maven.plugins</groupId>
  <artifactId>liferay-maven-plugin</artifactId>
  <version>${liferay.maven.plugin.version}</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>theme-merge</goal>
        <goal>build-css</goal>
        <goal>build-thumbnail</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <autoDeployDir>${liferay.auto.deploy.dir}</autoDeployDir>
    <appServerDeployDir>${liferay.app.server.deploy.dir}</appServerDeployDir>
    <appServerLibGlobalDir>${liferay.app.server.lib.global.dir}</appServerLibGlobalDir>
    <appServerPortalDir>${liferay.app.server.portal.dir}</appServerPortalDir>
    <liferayVersion>${liferay.version}</liferayVersion>
    <parentTheme>${liferay.theme.parent}</parentTheme>
    <pluginType>theme</pluginType>
    <themeType>${liferay.theme.type}</themeType>
  </configuration></plugin>

Please remember that the plugin will still be affected by any bugs in the Liferay Portal version, so if you have patches installed you should point the plugin to a patched portal bundle by setting the liferay.app.server.xxx properties. If you discover any bugs in any of the plugin mojos, please report them to our MAVEN Jira project.

This post was originally published on Liferay blog.

]]>
266
Installing MariaDB on Ubuntu https://jguru.fi/installing-mariadb-on-ubuntu.html?utm_source=rss&utm_medium=rss&utm_campaign=installing-mariadb-on-ubuntu https://jguru.fi/installing-mariadb-on-ubuntu.html#respond Sun, 19 Aug 2012 08:33:56 +0000 http://javaguru.fi/?p=11 I've been using MariaDB for some time now and it's a perfect replacement for MySQL, especially with the latest news on Oracle's move to hinder the MySQL developer community despite its promise to the EU. Now is a perfect time to ditch MySQL and move to something that's backed by the original authors of MySQL, and that something is MariaDB.

1. First pick a repository mirror close to you for your Ubuntu version from the MariaDB downloads page. Once you've picked your mirror, add the repository lines to /etc/apt/sources.list.d/mariadb.list. I'm still running 10.04 so here's what I put in my mariadb.list:

# MariaDB repository list - created 2012-07-04 18:04 UTC
# http://downloads.mariadb.org/mariadb/repositories/
deb http://ftp.heanet.ie/mirrors/mariadb/repo/5.5/ubuntu lucid main
deb-src http://ftp.heanet.ie/mirrors/mariadb/repo/5.5/ubuntu lucid main

2. Next you’ll need to import the signing key

sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com 0xcbcb082a1bb943db

3. Update

aptitude update

4. Install

aptitude install mariadb-server-5.5

Now you have MariaDB 5.5 installed and you can configure it exactly like you would configure MySQL.

]]>
https://jguru.fi/installing-mariadb-on-ubuntu.html/feed 0 11
Liferay 6.1 GA2 Maven release https://jguru.fi/liferay-6-1-ga2-maven-release.html?utm_source=rss&utm_medium=rss&utm_campaign=liferay-6-1-ga2-maven-release Wed, 08 Aug 2012 19:38:08 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/liferay-6-1-ga2-maven-release I'm glad to announce that we have released Maven artifacts for Liferay 6.1 GA2 for both EE and CE. The CE version of the portal artifacts is currently in Sonatype's repository waiting to be synced to Central, and the EE artifacts are available for download in the Customer Portal as before. We've also released the Liferay Maven Support project, which is the Plugins SDK equivalent for Maven. Both CE and EE compatible versions are being synced to Central. Please remember that this is not supported through your portal support contract. If you find any bugs in the Maven plugin or archetypes please file them in the MAVEN Jira project. The CE GA2 version number is 6.1.1 and the EE GA2 version number is 6.1.20. Remember to use the version corresponding to your running portal version as mixing versions might cause problems.

We’ve also added some new features and improvements.

New features

  • DBBuilder – build-db goal allows you to execute the DBBuilder to generate SQL files
  • SassToCSSBuilder – build-css goal precompiles SASS in your css and this goal has been added to theme archetype
  • JSF Portlet Archetype
  • ICEFaces Portlet Archetype
  • PrimeFaces Portlet Archetype
  • Liferay Faces Alloy Portlet Archetype

Improvements

  • Allow setting service build number and turn off auto increment for ServiceBuilder.
  • Allow build-service and direct-deploy from the parent project for Service builder and Ext projects.

 

This post was originally published on Liferay blog.

]]>
268
Tips for securing your Liferay installation https://jguru.fi/tips-for-securing-your-liferay-installation.html?utm_source=rss&utm_medium=rss&utm_campaign=tips-for-securing-your-liferay-installation https://jguru.fi/tips-for-securing-your-liferay-installation.html#respond Sun, 05 Aug 2012 19:07:02 +0000 http://javaguru.fi/?p=9 There are a few security-related things that I see people constantly doing wrong. The very first one is assuming a Liferay bundle with its default settings is secure for production. It is far from secure. Don't get me wrong, this doesn't mean that Liferay isn't secure; it just means that you shouldn't deploy Liferay with its default settings and assume it's secure. So let's go over some things you should consider.

Default admin user

Everyone knows the default admin user test@liferay.com, and some attacks have taken advantage of knowing this user and even its userId, which is predictable. What I would suggest is not only changing the email address and screen name of this user, but actually creating a completely new admin user and removing the default one.

Portal instance web id

The default company web id is liferay.com and it goes without saying that you should change it unless you are actually deploying liferay.com. You can do this simply by setting the company.default.web.id property in your portal-ext.properties. This must be done before you start your portal and let it generate the database.

Encryption algorithm

By default Liferay is configured to use the 56-bit DES encryption algorithm. I believe this legacy is due to US encryption export laws. The problem with 56-bit DES is that it was cracked back in the 90s and is not considered secure anymore. Liferay encrypts certain things with it, like your password in the Remember Me cookie. If someone gets hold of that cookie they can crack your password. I would recommend using at least 128-bit AES. To do that you just need to set the following properties before starting your portal against a clean database.

company.encryption.algorithm=AES
company.encryption.key.size=128

Password hashing

Recently a lot of sites have had their passwords compromised because they weren't using a salt with their password hash. Liferay by default uses SHA-1 to hash your password. A hash is a one-way algorithm that doesn't allow reversing the password from the hash, but if someone gets hold of your password hash it's still possible to crack it with brute force or by using rainbow tables. Rainbow tables are precalculated hashes that make it very easy and fast to find unsalted passwords. A salt is something we add to the password before hashing it, preferably unique for each password, so that even if two users have the same password their hashes are different. Liferay comes with the SSHA algorithm, which salts the password before calculating the SHA-1 hash from it. You can enable it by setting the following in your portal-ext.properties:

password.encryption.algorithm=SSHA
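
To see what the salt buys you, here is a small self-contained Java sketch of salted SHA-1 in the spirit of SSHA. This is only an illustration of the idea, not Liferay's actual implementation:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

public class SshaSketch {

    // Hash a password SSHA-style: SHA-1 over password bytes plus a random salt,
    // then store Base64(digest + salt) so the salt can be recovered at verification time.
    public static String hash(String password) throws Exception {
        byte[] salt = new byte[8];
        new SecureRandom().nextBytes(salt);

        MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
        sha1.update(password.getBytes(StandardCharsets.UTF_8));
        sha1.update(salt);
        byte[] digest = sha1.digest();

        byte[] digestPlusSalt = new byte[digest.length + salt.length];
        System.arraycopy(digest, 0, digestPlusSalt, 0, digest.length);
        System.arraycopy(salt, 0, digestPlusSalt, digest.length, salt.length);

        return "{SSHA}" + Base64.getEncoder().encodeToString(digestPlusSalt);
    }

    public static void main(String[] args) throws Exception {
        // Two users with the same password still get different hashes because of the salt.
        System.out.println(hash("secret"));
        System.out.println(hash("secret"));
    }
}

Because every hash embeds its own random salt, precomputed rainbow tables are useless against it.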

Unused SSO hooks

The default Liferay bundle comes with all the SSO hooks included. Even though they are not all enabled, it's a good idea to remove any hooks you are not using. There's a property called auto.login.hooks from which you should remove all the hooks you are not using. Also remember to disable their associated filters.
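
As an illustration only — the exact class names depend on your Liferay version, so check the default value in portal.properties — a trimmed entry in your portal-ext.properties could look like this if Remember Me is the only auto login hook you actually use:

auto.login.hooks=com.liferay.portal.security.auth.RememberMeAutoLogin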

Unused Remote APIs

Liferay has several different remote APIs such as JSON, JSONWS, web services, Atom, WebDAV, SharePoint etc. You should go through them and disable everything your site is not using. Please note that some Liferay built-in portlets rely on some of these APIs. All the APIs are accessible under the /api URL.

Mixed HTTP and HTTPS

Everyone should by now know about Firesheep, a Firefox extension that allows an attacker to sniff a wifi network they are connected to and hijack a user's authenticated session. This attack can compromise any website that doesn't send all authenticated traffic over https. If you use https for just part of the site and your users can access the rest of the site as authenticated users over http, then you are vulnerable to a Firesheep attack. This is especially bad with Liferay if you are using the default encryption and the Remember Me functionality, because then the attacker could even compromise your password and use it to log in to any system where you use the same password. I'm sad to say that even Liferay.com is vulnerable to this attack.

Shared Secrets

Don't forget to change any shared secrets. The auth.token.shared.secret property has a default value that you want to change so that no one can try to exploit it. This tip came from Jelmer, who has found many vulnerabilities in Liferay. Another one you don't want to overlook is auth.mac.shared.key, which has a default value of blank. That one is relevant if you set auth.mac.allow to true.

This is not an exhaustive list but this should make your Liferay installation much more secure than it is by default. For more tips on what to configure before going to production check out Liferay whitepapers. You should especially read the deployment checklist. If you can think of any other things that should be on this list comment them or tweet them to me @koivimik

Update: Added shared secret tip from Jelmer

]]>
https://jguru.fi/tips-for-securing-your-liferay-installation.html/feed 0 9
Monitoring Liferay with Nagios, Jolokia and JMX4Perl https://jguru.fi/monitoring-liferay-with-nagios-jolokia-and-jmx4perl.html?utm_source=rss&utm_medium=rss&utm_campaign=monitoring-liferay-with-nagios-jolokia-and-jmx4perl https://jguru.fi/monitoring-liferay-with-nagios-jolokia-and-jmx4perl.html#comments Sun, 29 Jul 2012 19:15:04 +0000 http://javaguru.fi/?p=7 How do I monitor Liferay? That's a question I've heard a lot lately. Well, the standard way of getting some information about the application is by using JMX. The downside of JMX is that it's a Java-only standard and the only remote connection is over RMI, which doesn't really sit well with non-Java monitoring software like the very popular Nagios. Another hurdle might be that your network admin might not be inclined to open up RMI access to the JVM.

There's a nice agent called Jolokia that provides an HTTP bridge to JMX. You can install it as a Java agent in pretty much any Java app or deploy it as a webapp. With Jolokia installed you can query any MBean for its values using a simple HTTP GET and get the data back as JSON objects. JMX4Perl is a Perl module and a set of scripts that provide an easy way to run those queries through Jolokia. One of those scripts is check_jmx4perl, which can be used in Nagios service checks.
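
To make the HTTP bridge concrete, here is a minimal Java sketch of the kind of GET those checks perform under the hood; the host, port and /jolokia context path are assumptions matching the setup later in this post:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class JolokiaHeapCheck {

    public static void main(String[] args) throws Exception {
        // Jolokia's read operation: /read/<mbean name>/<attribute>
        URL url = new URL(
            "http://localhost:8080/jolokia/read/java.lang:type=Memory/HeapMemoryUsage");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        StringBuilder json = new StringBuilder();
        BufferedReader reader = new BufferedReader(
            new InputStreamReader(conn.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) {
            json.append(line);
        }
        reader.close();

        // The response is a JSON object whose "value" field holds the
        // used, committed and max heap figures.
        System.out.println(json);
    }
}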

Okay, so now we know that we are going to need Nagios, Jolokia and JMX4Perl to monitor the Liferay JVM, but what should we monitor? Well, that depends on what information you are interested in, but at minimum I would monitor AJP or HTTP thread usage as well as heap utilization. Just by monitoring those values you'll know when your JVM becomes unresponsive, and you can also get some early warning that there are issues: for example, heap usage goes over the warning threshold and never returns to normal, or keeps constantly going over the threshold, which could indicate that you don't have enough heap allocated.

I'm going to assume that you have Nagios installed and configured, and I will only go through how to install Jolokia and configure some checks for threads and heap. So let's start by installing JMX4Perl.

Installing JMX4Perl is pretty simple with cpan. You just launch cpan command line client and install it like this:

cpan> install JMX::Jmx4Perl

Next you'll need to download Jolokia and deploy the jolokia.war to your app server. For this example I'm going to assume that you are using Tomcat 7. Once you've deployed Jolokia it's usually a good idea to restrict who can query it. For this example we are just going to restrict it to a certain IP address (the Nagios server) and limit it to read operations only. Since I don't like modifying the war, we are going to tell Jolokia where to find the policy file through a context parameter. Create a jolokia.xml in tomcat/conf/Catalina/localhost with the following content:

<Context path="/jolokia">
        <Parameter name="policyLocation" value="file:///etc/jolokia/jolokia-access.xml" />
</Context>

That tells Jolokia to look for the policy file jolokia-access.xml from /etc/jolokia/jolokia-access.xml. This is great when you are running multiple tomcats in the same server and want them to share the jolokia policy file.

Now go ahead and create the jolokia-access.xml in /etc/jolokia

<?xml version="1.0" encoding="utf-8"?>
<restrict>
        <remote>
                <host>[YOUR NAGIOS SERVER IP]</host>
        </remote>
        <http>
                <method>get</method>
                <method>post</method>
        </http>
        <commands>
                <command>read</command>
        </commands>
</restrict>

Next we need to create the configuration for jmx4perl. In /etc/jmx4perl/jmx4perl.cfg we are going to include some preconfigured checks and extend them. For Tomcat 7 you need to add quotes around the thread pool name. We also need to set warning and critical levels for alerts. You'll also need to add a Server entry for each Tomcat you want to monitor.

# Default definitions
include default/memory.cfg
include default/tomcat.cfg

# ==========================
# Check definitions

<Check tc7_connector_threads>
	Use = relative_base($1,$2)
	Label = Connector $0 : $BASE
	Value = Catalina:name="$0",type=ThreadPool/currentThreadCount
	Base = Catalina:name="$0",type=ThreadPool/maxThreads
	Critical 95
	Warning 90
</Check>

<Check j4p_memory_heap>
	Use memory_heap
	Critical 95
	Warning 90
</Check>

<Server tomcat>
	Url http://MY_TOMCAT_HOSTNAME:8080/jolokia
</Server>

Then in /etc/nagios3/commands.cfg we’ll need to add a check command for jmx4perl and we’ll use the check_jmx4perl script to do that.

define command {
	command_name    check_j4p_cmd
	command_line    /usr/local/bin/check_jmx4perl --unknown-is-critical --config /etc/jmx4perl/jmx4perl.cfg --server $ARG1$ --check $ARG2$ $ARG3$
}

Then we need to define a service to monitor in /etc/nagios3/conf.d/host-MY_TOMCAT_HOSTNAME.cfg

define service {
	use generic-service
	host_name MY_TOMCAT_HOSTNAME
	service_description Tomcat Heap Memory
	check_command check_j4p_cmd!tomcat!j4p_memory_heap!x
}

define service {
	use generic-service
	host_name MY_TOMCAT_HOSTNAME
	service_description Tomcat AJP Threads
	check_command check_j4p_cmd!tomcat!tc7_connector_threads!ajp-bio-8009
}

The first check above is for your Tomcat heap and the second one is for Tomcat 7 AJP threads.

Now you should have all the pieces to implement your own monitoring using Nagios, Jolokia and JMX4Perl. You should also remember that you can apply this to any JEE application, not just Liferay.

]]>
https://jguru.fi/monitoring-liferay-with-nagios-jolokia-and-jmx4perl.html/feed 3 7
Configuring c3p0 connection pool for Liferay on Tomcat https://jguru.fi/configuring-c3p0-connection-pool-for-liferay-on-tomcat.html?utm_source=rss&utm_medium=rss&utm_campaign=configuring-c3p0-connection-pool-for-liferay-on-tomcat https://jguru.fi/configuring-c3p0-connection-pool-for-liferay-on-tomcat.html#respond Wed, 18 Jul 2012 23:52:14 +0000 http://javaguru.fi/?p=5 There are several ways you could configure a connection pool for Liferay on Tomcat, but the way I'm going to show is the JEE way and the only one I consider correct.

The first thing is to copy or move the c3p0.jar from webapps/ROOT/WEB-INF/lib/ to lib/. Also make sure you have your database driver there. In this example it would be mysql.jar.

Then we need to tell Liferay that you want to use a connection pool from JNDI, and you can do this by adding the following line to your portal-ext.properties, which can be placed in the Liferay Home directory (the directory above tomcat).

jdbc.default.jndi.name=jdbc/LiferayPool

Add the following snippet to conf/server.xml inside GlobalNamingResources. Adjust the pool size, idle time and connection test period according to your environment. They are particularly important when you have a firewall between Liferay and the database or when the database server drops connections after a certain idle period.

<Resource
    name="jdbc/LiferayPool"
    auth="Container"
    type="com.mchange.v2.c3p0.ComboPooledDataSource"
    factory="org.apache.naming.factory.BeanFactory"
    driverClass="com.mysql.jdbc.Driver"
    jdbcUrl="jdbc:mysql://localhost/lportal?useUnicode=true&amp;characterEncoding=UTF-8&amp;useFastDateParsing=false"
    user="lportal"
    password="test"
    minPoolSize="10"
    maxPoolSize="20"
    maxIdleTime="600"
    preferredTestQuery="select 1 from dual"
    idleConnectionTestPeriod="180"
    numHelperThreads="5"
    maxStatementsPerConnection="100"
/>

Now we need to link the jdbc/LiferayPool name defined in portal-ext.properties to the jdbc/LiferayPool defined in server.xml and this definition goes to conf/Catalina/localhost/ROOT.xml

<ResourceLink name="jdbc/LiferayPool" global="jdbc/LiferayPool" type="javax.sql.DataSource"/>

Now we are done and you can start your Tomcat with the new connection pool. Note that you can follow a similar process to configure a MailSession from JNDI.
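
As a sketch of the mail side (not covered in detail here), the pieces are analogous; the SMTP host is a placeholder, and you should double-check the property name against your portal.properties:

# portal-ext.properties
mail.session.jndi.name=mail/MailSession

<!-- conf/server.xml inside GlobalNamingResources -->
<Resource
    name="mail/MailSession"
    auth="Container"
    type="javax.mail.Session"
    mail.smtp.host="localhost"
/>

<!-- conf/Catalina/localhost/ROOT.xml -->
<ResourceLink name="mail/MailSession" global="mail/MailSession" type="javax.mail.Session"/>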

]]>
https://jguru.fi/configuring-c3p0-connection-pool-for-liferay-on-tomcat.html/feed 0 5
Why is my java process taking more memory than I gave it? https://jguru.fi/why-is-my-java-process-taking-more-memory-than-i-gave-it.html?utm_source=rss&utm_medium=rss&utm_campaign=why-is-my-java-process-taking-more-memory-than-i-gave-it https://jguru.fi/why-is-my-java-process-taking-more-memory-than-i-gave-it.html#respond Wed, 06 Jun 2012 17:31:05 +0000 http://javaguru.fi/?p=18 It seems to be quite a common misconception that the memory you give to a Java process with the -Xmx and -Xms command line arguments is the amount of memory the process will consume, but in fact that is only the amount of memory your Java object heap will have. The heap is just one factor in how much memory the Java process will consume. To better understand how much memory your Java application will consume from the system you need to understand all the factors that account for the memory usage. Those factors are:

  • Objects
  • Classes
  • Threads
  • Native data structures
  • Native code

The memory consumption associated with each item varies across applications, runtime environments and platforms. So how do you calculate the total memory? Well, it's not really all that easy to get an accurate number because you have little control over the native part. The only parts you can really control are the amount of heap (-Xmx), the memory consumed by classes (-XX:MaxPermSize) and the thread stack size (-Xss), which controls the amount of memory each thread takes. Be careful when adjusting the stack size, as too small a size will cause StackOverflowErrors and your application won't work correctly. So the formula is:

(-Xmx) + (-XX:MaxPermSize) + numberofthreads * (-Xss) + Other mem

The other mem part depends on how much native code is used: NIO, socket buffers, JNI etc. It's anywhere from 5% of total JVM memory and up. So assume we have the following JVM arguments and 100 threads:

-Xmx1024m -XX:MaxPermSize=256m -Xss512k

That would mean that the jvm process would take at least: 1024m + 256m + 100*512k + (0.05 * 1330m) = 1396.5m.
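
If you want to play with the numbers, here is a small throwaway Java sketch of the same estimate; the 5% overhead used here is just the lower bound mentioned above:

public class JvmMemoryEstimate {

    public static void main(String[] args) {
        double heapMb = 1024;   // -Xmx
        double permGenMb = 256; // -XX:MaxPermSize
        double stackKb = 512;   // -Xss
        int threads = 100;

        // Thread stacks: 100 * 512k = 50m
        double threadStacksMb = threads * stackKb / 1024;
        double jvmMb = heapMb + permGenMb + threadStacksMb;

        // Add 5% for native data structures, native code, buffers etc.
        double totalMb = jvmMb + 0.05 * jvmMb;

        System.out.printf("Estimated process size: %.1f MB%n", totalMb); // ~1396.5 MB
    }
}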

I usually use a quick approximation rule of 1.5 * max heap as the minimum amount of RAM a Tomcat process will require. This can be higher if you have a large application that requires you to increase MaxPermSize beyond 256m. If you use this to size how much memory your system will require, remember that you need to leave memory for the OS and other applications running on the system; otherwise you might end up using a lot of virtual memory, which will affect your application performance negatively.

]]>
https://jguru.fi/why-is-my-java-process-taking-more-memory-than-i-gave-it.html/feed 0 18
What to Do When You Get “Error listenerStart” with Tomcat https://jguru.fi/what-to-do-when-you-get-error-listerstart-with-tomcat.html?utm_source=rss&utm_medium=rss&utm_campaign=what-to-do-when-you-get-error-listerstart-with-tomcat https://jguru.fi/what-to-do-when-you-get-error-listerstart-with-tomcat.html#comments Tue, 29 May 2012 13:58:41 +0000 http://javaguru.fi/?p=20 I'm sure many people other than me have banged their head against the wall trying to figure out an error like this:

SEVERE: Error listenerStart 
26-May-2012 13:44:27 org.apache.catalina.core.StandardContext startInternal 
SEVERE: Context [] startup failed due to previous errors

That basically means that Tomcat failed to start the webapp because there was an error with some listener, quite often the Spring context listener. The really annoying part is that it doesn't actually show you what went wrong. There's actually a pretty simple way to get Tomcat to log the actual error. You just need to create a logging.properties in WEB-INF/classes of the failing webapp and add the following lines to it:

org.apache.catalina.core.ContainerBase.[Catalina].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].handlers = java.util.logging.ConsoleHandler

Then just reload the webapp to see the error in the Tomcat console log. I hope this tip saves you a lot of hassle trying to figure out the root cause of the problem.

]]>
https://jguru.fi/what-to-do-when-you-get-error-listerstart-with-tomcat.html/feed 1 20
How to Create a Consistent Liferay Backup https://jguru.fi/how-to-create-a-consistent-liferay-backup.html?utm_source=rss&utm_medium=rss&utm_campaign=how-to-create-a-consistent-liferay-backup https://jguru.fi/how-to-create-a-consistent-liferay-backup.html#respond Mon, 28 May 2012 16:30:32 +0000 http://javaguru.fi/?p=22 This is a question I've been asked in nearly all the Liferay System Administrator trainings I've given. Most people will just back up their database and Liferay data directory separately, but any competent system admin will tell you that that's not guaranteed to be consistent, because someone could upload or delete files between the time you took the database dump and the time you copied the data directory. Here I'm assuming that you are storing your document library binaries on the filesystem instead of in the database.

Now, to achieve a consistent backup with minimal interruption to your portal, what you need to do is take a read lock on all your Liferay tables. This prevents writes to the database. Then you dump the database to a file with a tool like mysqldump, and take a quick snapshot of the filesystem before you unlock the tables. You need to keep the connection that locked the tables open until this whole process is done. Only once you have the database dump and the filesystem snapshot ready can you release the lock, and then you can back up the data directory using whatever method you would normally use.
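
Before getting to the Perl script mentioned below, here is a minimal Java/JDBC sketch of that locking sequence. The connection URL, credentials, dump command and LVM snapshot command are placeholders borrowed from the example invocation further down, and the MySQL JDBC driver is assumed to be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ConsistentBackupSketch {

    public static void main(String[] args) throws Exception {
        // The connection that takes the lock must stay open until both the
        // database dump and the filesystem snapshot are done.
        Connection conn = DriverManager.getConnection(
            "jdbc:mysql://localhost/lportal", "dba-backup", "mypassword");
        Statement stmt = conn.createStatement();
        try {
            stmt.execute("FLUSH TABLES WITH READ LOCK");

            // Dump the database while writes are blocked.
            run("mysqldump -u dba-backup -pmypassword lportal > /backups/mysql/lportal.sql");

            // Take a quick LVM snapshot of the volume holding the document library.
            run("lvcreate --snapshot --size 50G --name opt-snapshot /dev/vg0/opt");
        } finally {
            stmt.execute("UNLOCK TABLES");
            stmt.close();
            conn.close();
        }
        // The snapshot can now be mounted and copied at leisure.
    }

    private static void run(String command) throws Exception {
        Process process = new ProcessBuilder("sh", "-c", command).inheritIO().start();
        if (process.waitFor() != 0) {
            throw new RuntimeException("Command failed: " + command);
        }
    }
}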

For the PoC I’m using MySQL and my filesystem is on Linux LVM volume which supports taking snapshots. I’ve written a Perl script to execute all the commands. I’m sharing the script under GPL and it’s available in Github. Feel free to fork it and modify it to suit your needs and if you have good ideas send me a pull request.

The way the script works is you pass in bunch of parameters like database details, lvm volume location, source and target directories. Here’s an example:

backup_liferay.pl -u dba-backup -p mypassword -d lportal -h localhost \
--lvm-volume-path /dev/vg0/opt --lvm-snapshot-volume-path /dev/vg0/opt-snapshot \
--lvm-snapshot-volume-name opt-snapshot --lvm-snapshot-volume-size 50G \
--snapshot-mount-path /backups/snapshot \
--source-path /liferay-portal-6.1.0/data/document_library \
--db-target-path /backups/mysql/lportal.sql.gz \
--data-target-path /backups/liferay --compress

Now even if that doesn’t exactly match your system I hope it gives you an idea how to roll your own Liferay backup.

]]>
https://jguru.fi/how-to-create-a-consistent-liferay-backup.html/feed 0 22
Debugging Maven Plugins https://jguru.fi/debugging-maven-plugins.html?utm_source=rss&utm_medium=rss&utm_campaign=debugging-maven-plugins https://jguru.fi/debugging-maven-plugins.html#respond Sun, 27 May 2012 08:49:52 +0000 http://javaguru.fi/?p=24 When developing Maven plugins, things don't always work the way you expect, so you need to debug the Mojo to see what's really going on. I had a weird case where my plugin worked when I ran it independently, but when I ran it with mvn clean package it always failed. The first thing you can do is run in debug mode, which produces a lot more output and shows all the plugin execution configuration. You can enable it with the -X argument like this:

mvn -X clean package

Now that didn't quite help in my case, so the next thing I did was run it with a remote debugger. That way I could step through the code line by line and inspect all the variables. To do that you just modify the MAVEN_OPTS environment variable in the shell where you are executing your Maven plugin and add the Java debugger agentlib config like this:

MAVEN_OPTS="-agentlib:jdwp=transport=dt_socket,address=8000,server=y,suspend=y"

I used suspend=y so that it would wait for my debugger to attach before continuing the execution. Then you just add some breakpoints in your IDE and remote debug it like any Java application. That, by the way, solved my issue, as I realized each of my Liferay Maven plugins was initializing the Liferay configuration, but since they were all run after each other in the same context only the first one mattered.

]]>
https://jguru.fi/debugging-maven-plugins.html/feed 0 24
Creating Liferay Themes with Maven https://jguru.fi/creating-liferay-themes-maven.html?utm_source=rss&utm_medium=rss&utm_campaign=creating-liferay-themes-maven Fri, 16 Mar 2012 00:37:18 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/creating-liferay-themes-with-maven Some time ago I posted on how you can get started creating portlets with the Liferay Maven SDK; now I'm going to show how you can add themes to your project. If you need a refresher on how to get started, check out that post.

1) Open command prompt or terminal and go to your project directory. Next we are going to create a theme using the Liferay theme template. Run:

mvn archetype:generate
    -DarchetypeArtifactId=liferay-theme-archetype
    -DarchetypeGroupId=com.liferay.maven.archetypes
    -DarchetypeVersion=6.1.0
    -DartifactId=sample-theme
    -DgroupId=com.liferay.sample
    -Dversion=1.0-SNAPSHOT
For 6.1 EE GA1 use -DarchetypeVersion=6.1.10.

Now you have your theme project in sample-theme directory with following structure.

sample-theme
sample-theme/pom.xml
sample-theme/src
sample-theme/src/main
sample-theme/src/main/resources
sample-theme/src/main/webapp
sample-theme/src/main/webapp/WEB-INF
sample-theme/src/main/webapp/WEB-INF/liferay-plugin-package.properties
sample-theme/src/main/webapp/WEB-INF/web.xml

2) Open the theme pom.xml file. From the properties section remove liferay.version and liferay.auto.deploy.dir properties. These properties should be defined in the pom.xml in your project root just as we did with the portlet project.

You should also note that there are two additional properties, liferay.theme.parent and liferay.theme.type. These set the parent theme and the theme template language, just like in the Ant-based Plugins SDK. The liferay.theme.parent property, however, allows you to define basically any war artifact as the parent. The syntax is groupId:artifactId:version, or you can use the core themes: _unstyled, _styled, classic and control_panel.
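
For example, the properties could end up looking like this; the com.example coordinates on the commented line are made up just to show the war artifact form:

<properties>
    <liferay.theme.parent>classic</liferay.theme.parent>
    <!-- or any war artifact: <liferay.theme.parent>com.example:base-theme:1.0-SNAPSHOT</liferay.theme.parent> -->
    <liferay.theme.type>vm</liferay.theme.type>
</properties>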

3) Now you can add your customizations in src/main/webapp. Just follow the same structure as you would do in _diffs. So your custom.css would go to src/main/webapp/css/custom.css.

4) Once you’ve done your customizations and want to create the war file just run

mvn package

It will create the war file just like with any Maven war-type project. Another thing it will do is download and copy your parent theme and then overlay your changes on top of it. It will also create a thumbnail from src/main/webapp/images/screenshot.png, just like the Ant-based Plugins SDK does. These are accomplished by adding the theme-merge and build-thumbnail goals into the generate-sources phase.

5) Now deploy the theme into your Liferay bundle by running:

mvn liferay:deploy
]]>

This post was originally published on Liferay blog.

]]>
270
Getting started with Liferay SAML 2.0 Identity Provider https://jguru.fi/getting-started-liferay-saml-2-0-identity-provider.html?utm_source=rss&utm_medium=rss&utm_campaign=getting-started-liferay-saml-2-0-identity-provider Tue, 28 Feb 2012 01:27:21 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/getting-started-with-liferay-saml-2-0-identity-provider Liferay 6.1 EE comes with SAML 2.0 Identity Provider and Service Provider support via the SAML plugin. If you are not familiar with SAML, check out my Introduction to SAML presentation slides.

In this post we will configure Liferay to be a SAML Identity Provider and configure Salesforce to be a Service Provider. After we are done we will have a user that can move from Liferay to Salesforce without having to authenticate again on Salesforce.

You’ll need following things to complete this by yourself:

* Liferay Portal 6.1 EE GA1 Tomcat bundle
* SAML Portlet WAR
* Salesforce developer account. You can sign-up here for free.

The first thing to do is download and install Liferay. If you need help configuring Liferay refer to the Liferay 6.1 User Guide. Once that is done you'll need to configure the SAML identity provider before deploying the plugin. The IdP needs a private and public key pair for signing SAML messages. It uses a Java keystore to store them. We'll create the keystore and the key pair using keytool, which is part of the JDK. You need to pick a unique entity id for your IdP and a password that is used to protect the keystore and the private key. In this example we'll use liferaysamlidpdemo as the entity id and liferay as the password for both the keystore and the key. The keystore is created in LIFERAY_HOME/data/keystore.jks as this is the default location the SAML plugin will look for it. You can also configure the location and type of the keystore, and we will do that here just for reference.

keytool -genkeypair -alias liferaysamlidpdemo -keyalg RSA -keysize 2048 -keypass liferay -storepass liferay -keystore data/keystore.jks

You'll be asked to provide some information that will be in the certificate with the public key.

What is your first and last name?
  [Unknown]:  Liferay SAML IdP Demo
What is the name of your organization?
  [Unknown]:  Liferay SAML IdP Demo
What is the name of your City or Locality?
  [Unknown]:
What is the name of your State or Province?
  [Unknown]:
What is the two-letter country code for this unit?
  [Unknown]:
Is CN=Liferay SAML IdP Demo, OU=Unknown, O=Liferay SAML IdP Demo, L=Unknown, ST=Unknown, C=Unknown correct?
  [no]:  yes

Next step is to add SAML configuration to your portal-ext.properties.

saml.enabled=true
saml.role=idp
saml.entity.id=liferaysamlidpdemo
saml.require.ssl=false
saml.sign.metadata=true
saml.idp.authn.request.signature.required=true
saml.keystore.path=${liferay.home}/data/keystore.jks
saml.keystore.password=liferay
saml.keystore.type=jks
saml.keystore.credential.password[liferaysamlidpdemo]=liferay

Now you can deploy SAML plugin by copying it to LIFERAY_HOME/deploy
and starting up tomcat. Wait for the saml-portlet to be deployed and
available and then open http://localhost:8080/c/portal/saml/metadata.
If you have configured everything correctly you should see the IdP
metadata similar to below. I’ve just shortened the data on
signature and certificate elements.

<?xml version="1.0" encoding="UTF-8"?>
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="liferaysamlidpdemo">
  <ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
    <ds:SignedInfo>
      <ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
      <ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
      <ds:Reference URI="">
        <ds:Transforms>
          <ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
          <ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
        </ds:Transforms>
        <ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
        <ds:DigestValue>mVKz/Tv6o40+SrEF595+Gedmoo8=</ds:DigestValue>
      </ds:Reference>
    </ds:SignedInfo>
    <ds:SignatureValue>AAJsDF8dJv5XQw6Ty1MSg7 ... OXvQw==</ds:SignatureValue>
    <ds:KeyInfo>
      <ds:X509Data>
        <ds:X509Certificate>MIIDjjCCAnagAwIB...</ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>
  </ds:Signature>
  <md:IDPSSODescriptor ID="liferaysamlidpdemo" WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>MIIDjj ...</ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://localhost:8080/c/portal/saml/slo_redirect"/>
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="http://localhost:8080/c/portal/saml/sso"/>
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="http://localhost:8080/c/portal/saml/sso"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>

Even though the IdP is configured and functioning it’s not
very useful because there’s no Service Providers configured. For
this example we are going to use Salesforce developer account to
demonstrate single sign-on between Liferay and Salesforce. If you
haven’t already signed up for Salesforce developer account do it here.

We’ll need to export the certificate from keystore because
Salesforce doesn’t know how to read SAML metadata.

keytool -export -alias liferaysamlidpdemo -file liferaysamlidpdemo.crt -keystore data/keystore.jks -storepass liferay -keypass liferay

Now login to your Salesforce developer account here. On your dashboard click on Setup.

Then click on Security Controls > Single Sign-On Settings under Administration Setup.

Then click on Edit.

Here are the settings you need:

* SAML Enabled
* SAML Version: 2.0
* Issuer: liferaysamlidpdemo (this is the entity id of the IdP)
* Identity Provider Certificate: liferaysamlidpdemo.crt, which you exported earlier
* Identity Provider Login URL: http://localhost:8080/c/portal/saml/sso
* SAML User ID Type: Select "Assertion contains User's salesforce.com username"
* SAML User ID Location: Select "User ID is in the NameIdentifier element of the Subject statement"
* Identity Provider Logout URL: http://localhost:8080/c/portal/logout (Salesforce does not support the SAML Single Logout Profile)

Verify that your settings are correct and then click on Download Metadata. Also note the Entity Id as this will be needed on the IdP side.

Move the downloaded metadata xml to
LIFERAY_HOME/data/saml/salesforce-metadata.xml. Now we need to
configure the IdP to know about this Service Provider. This is done by
telling saml plugin where to find the SAML metadata for Salesforce.

saml.metadata.paths=${liferay.home}/data/saml/salesforce-metadata.xml

If your Salesforce Entity Id is not https://saml.salesforce.com
you’ll also need to add following lines to your
portal-ext.properties. Note I’m using
https://saml.salesforce.com as the entity id but you would replace it
with what ever Salesforce reported it to be.

saml.idp.metadata.attributes.enabled[https://saml.salesforce.com]=true
saml.idp.metadata.attribute.names[https://saml.salesforce.com]=
saml.idp.metadata.name.id.format[https://saml.salesforce.com]=urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
saml.idp.metadata.salesforce.attributes.enabled[https://saml.salesforce.com]=true

If you had your Tomcat still running, just restart it so that the new property values are read. Then login as test@liferay.com / test. Now click on Manage > Site Pages. Click on Add Page. Add the following values:

Name: Salesforce
Type: URL
URL:  /c/portal/saml/sso?entityId=https://saml.salesforce.com

Notice the entityId is the same Entity Id that was shown as entity
id on the Salesforce Single Sign-On configuration page.

Go to the Control Panel and add a new user with the same email address as your Salesforce developer account. Sign out and login with that new account. Now click on the Salesforce page link. If everything was configured correctly you are redirected to Salesforce and you are signed in with your developer account. If you want to be redirected to some other page than the home page, you can add a URL parameter RelayState with the page URL you want to be redirected to as the value. For example the URL could look like this: /c/portal/saml/sso?entityId=https://saml.salesforce.com&RelayState=/006/o. This would take me to my Opportunities page directly.

Now sign out from Salesforce and you will be taken back to Liferay and logged out from Liferay. Now if you click on the Salesforce page it will present you with the Liferay login page, and after login it will take you to Salesforce.

Update: If you need to set up Liferay as an SP, check out my colleague's post Setting up Liferay as Service Provider.

This post was originally published on Liferay blog.

]]>
272
Deploying Liferay artifacts to your own maven repository https://jguru.fi/deploying-liferay-artifacts-maven-repository.html?utm_source=rss&utm_medium=rss&utm_campaign=deploying-liferay-artifacts-maven-repository Tue, 21 Feb 2012 22:12:38 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/deploying-liferay-artifacts-to-your-own-maven-repository As part of the Liferay 6.1 release we've created a new package that has a convenient script to install Liferay artifacts to your local repository or to a remote repository. This package is provided for both CE and EE releases, but it is more useful for EE users because we don't release EE versions of the artifacts to the Maven Central repository.

You can download the 6.1 GA1 package from here and 6.1 EE users can download it from the Customer Portal. Once you have downloaded the zip file, unzip it.

In the root of the package you'll find build.properties. This file defines the remote repository location, the repository id and an optional gpg signing key and password. You can override settings in this file, similarly to the Plugins SDK, by creating a build.USERNAME.properties file and overriding the properties you want. If you are just deploying to your local repository there's no need to override any settings.

Before you begin you should make sure you have mvn in your path. For remote deployment you should also increase the available memory for Maven, otherwise you might get an OutOfMemoryError. For Windows you can use the following in your cmd prompt or set the MAVEN_OPTS environment variable.

set MAVEN_OPTS=-Xmx512m -XX:MaxPermSize=128m

For Unix-like systems such as Linux and Mac OS X use

export MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=128m"

To deploy to your local maven repository you can just run:

ant install 

To deploy to a remote repository such as Sonatype Nexus you need to set the credentials required to deploy to the repository in ${USER_HOME}/.m2/settings.xml like this:

<?xml version="1.0"?>
<settings>
    <servers>
        <server>
            <id>liferay</id>
            <username>admin</username>
            <password>password</password>
        </server>
    </servers>
</settings>

Then you need to add the repository id and repository location to your build.USERNAME.properties like this:

lp.maven.repository.id=liferay
lp.maven.repository.url=http://localhost/nexus/content/repositories/liferay-release

Notice that the repository id must match the one in your settings.xml so that the correct credentials are picked up. You can also set gpg.keyname and gpg.passphrase if you want the artifacts signed. Check out this blog post on how to generate a gpg key and distribute the public key.

Now you can deploy it just by running:

ant deploy

Now you have the following Liferay artifacts at your disposal. Their groupId is com.liferay.portal, the artifactId is one of those listed below, and the version is the Liferay release number, such as 6.1.0 for 6.1 GA1 and 6.1.10 for 6.1 EE GA1.

  • portal-client
  • portal-impl
  • portal-service
  • portal-web
  • support-tomcat
  • util-bridges
  • util-java
  • util-taglib
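
For example, to compile a plugin against the portal API you would typically declare something along these lines; provided scope is the usual choice since the portal already ships the jar, but adjust as needed:

<dependency>
    <groupId>com.liferay.portal</groupId>
    <artifactId>portal-service</artifactId>
    <version>6.1.0</version>
    <scope>provided</scope>
</dependency>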
]]>

This post was originally published on Liferay blog.

]]>
274
Getting started with Liferay Maven SDK https://jguru.fi/getting-started-liferay-maven-sdk.html?utm_source=rss&utm_medium=rss&utm_campaign=getting-started-liferay-maven-sdk Thu, 02 Feb 2012 01:53:39 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/getting-started-with-liferay-maven-sdk This will be the first in a series of posts on how to develop Liferay plugins with Maven. In this post we'll start by creating a new parent project for your plugins and adding a portlet project to it. You need to have your Maven environment set up with Maven and Java installed. If you don't know how to do that I would recommend reading Maven: The Complete Reference from Sonatype, Inc. Chapter 2 has good instructions on how to install Maven.

1) Download and install a Liferay 6.1.0 bundle. In these posts we assume it's the Tomcat bundle, but you can use any bundle. I'll refer to the bundle install location as LIFERAY_HOME from now on. If you need instructions on how to install the bundle please refer to the Liferay 6.1 User Guide.
2) Create a new directory which will be your project root. This is the location where you would extract the Liferay Plugins SDK if you were using Ant. Then in that directory create a pom.xml file.
Now you should adjust the groupId and artifactId to match your project. Also set the value of liferay.auto.deploy.dir to LIFERAY_HOME/deploy; this is where the plugin is copied for Liferay to deploy. The liferay.version property is set to the version of Liferay you are using.
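
A minimal parent pom along these lines could look like the following sketch, where the artifactId and the deploy path are placeholders you should adjust to your own project:

<?xml version="1.0"?>
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.liferay.sample</groupId>
    <artifactId>sample-parent</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>pom</packaging>

    <properties>
        <liferay.version>6.1.0</liferay.version>
        <liferay.auto.deploy.dir>/opt/liferay-portal-6.1.0/deploy</liferay.auto.deploy.dir>
    </properties>
</project>
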
3) Open a command prompt or terminal and go to your project directory. Next we're going to create a portlet project using the Liferay portlet project template. Run
mvn archetype:generate
That command will show a list of available project templates.
Choose number 24, or whatever number you have for com.liferay.maven.archetypes:liferay-portlet-archetype.
Next you will be asked to choose the template version. Choose number 6, or whatever you have for the 6.1.0 version.
Next you will be asked to provide groupId, artifactId and version.
For groupId use the same as in the first pom.xml. In my case it would be com.liferay.sample. For artifactId I chose sample-portlet as this is the directory it will create. Version should be the same as the project parent. Once you have confirmed the values Maven will create the portlet project and add it to your parent project as a module automatically.
Now you have your portlet project in the sample-portlet directory.
4) Go to sample-portlet directory and run
mvn package
This will compile any classes and package the portlet war file in the target directory.
5) To deploy the portlet into your Liferay bundle you can run
mvn liferay:deploy
Now you have created your first Liferay plugin project with Maven and deployed it to your Liferay bundle.

This post was originally published on Liferay blog.

]]>
276
Liferay 6.1 GA1 Maven artifacts released https://jguru.fi/liferay-6-1-ga1-maven-artifacts-released.html?utm_source=rss&utm_medium=rss&utm_campaign=liferay-6-1-ga1-maven-artifacts-released Tue, 10 Jan 2012 00:54:55 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/liferay-6-1-ga1-maven-artifacts-released  

I’m glad to announce that we have released Liferay maven artifacts to 6.1 GA1. 

All the artifacts will be pushed into the central repository through http://oss.sonatype.org where they are already available. 

This release includes following artifacts:

- portal-client
- portal-impl
- portal-service
- portal-web
- support-tomcat
- util-bridges
- util-java
- util-taglib

In addition to this we’ve packaged the Liferay artifacts into a convenient zip file called /liferay-portal-maven-6.1.0-ce-ga1-20120106155615760.zip with ant script to allow you to deploy them into your local repository easily. We will be providing this for EE releases as well since EE artifacts will not be available from Central.  

We have also released the Liferay maven plugin and archetypes for all types of Liferay plugins:

- liferay-ext-archetype
- liferay-hook-archetype
- liferay-layouttpl-archetype
- liferay-portlet-archetype
- liferay-servicebuilder-archetype
- liferay-theme-archetype
- liferay-web-archetype

I will post some instructions later on how to use those archetypes. If you've used the snapshot version, note there was one last-minute change that requires you to now manually set the properties liferay.version and liferay.auto.deploy.dir in your pom.xml.

]]>

This post was originally published on Liferay blog.

]]>
278
Overriding and adding struts actions from hook plugins https://jguru.fi/overriding-adding-struts-actions-hook-plugins.html?utm_source=rss&utm_medium=rss&utm_campaign=overriding-adding-struts-actions-hook-plugins Mon, 17 Jan 2011 22:23:27 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/overriding-and-adding-struts-actions-from-hook-plugins This is a cool new feature I worked on with Brian, and it's coming in 6.1 as well as 6.0 EE SP2 and 5.2 EE SP6. With this feature you can add new struts actions to the portal from a hook plugin and you can override any existing action with it.

There are two interfaces: com.liferay.portal.kernel.struts.StrutsAction and com.liferay.portal.kernel.struts.StrutsPortletAction. StrutsAction is used for regular struts actions like /c/portal/update_password, and StrutsPortletAction is used for those that are used from portlets.

Let’s create a new simple hook to test it out. This hook will create a new struts path /c/portal/sample and wraps an existing struts action. Start by creating a new hook plugin in your plugins SDK. I’ll call it sample-struts-action.

./create.sh sample-struts-action

Next edit the liferay-hook.xml and add following fragment:

<?xml version="1.0"?>
<!DOCTYPE hook PUBLIC "-//Liferay//DTD Hook 6.1.0//EN" "http://www.liferay.com/dtd/liferay-hook_6_1_0.dtd">

<hook>
	<portal-properties>portal.properties</portal-properties>
	<custom-jsp-dir>/META-INF/custom_jsps</custom-jsp-dir>
	<struts-action>
		<struts-action-path>/portal/sample</struts-action-path>
		<struts-action-impl>com.liferay.samplestrutsaction.hook.action.SampleStrutsAction</struts-action-impl>
	</struts-action>
	<struts-action>
		<struts-action-path>/message_boards/view</struts-action-path>
		<struts-action-impl>com.liferay.samplestrutsaction.hook.action.SampleStrutsPortletAction</struts-action-impl>
	</struts-action>
</hook>

Next we need to create the struts action like below:

package com.liferay.samplestrutsaction.hook.action;

import com.liferay.portal.kernel.struts.BaseStrutsAction;
import com.liferay.portal.kernel.util.ParamUtil;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/**
 * @author Mika Koivisto
 */
public class SampleStrutsAction extends BaseStrutsAction {

	public String execute(
		HttpServletRequest request, HttpServletResponse response)
		throws Exception {

		String name = ParamUtil.get(request, "name", "World");

		request.setAttribute("name", name);

		return "/portal/sample.jsp";
	}

}

Next create the second Struts action. This one will actually wrap the ViewAction of the message boards portlet.

package com.liferay.samplestrutsaction.hook.action;

import com.liferay.portal.kernel.struts.BaseStrutsPortletAction;
import com.liferay.portal.kernel.struts.StrutsPortletAction;

import javax.portlet.ActionRequest;
import javax.portlet.ActionResponse;
import javax.portlet.PortletConfig;
import javax.portlet.RenderRequest;
import javax.portlet.RenderResponse;
import javax.portlet.ResourceRequest;
import javax.portlet.ResourceResponse;

/**
 * @author Mika Koivisto
 */
public class SampleStrutsPortletAction extends BaseStrutsPortletAction {

	public void processAction(
			StrutsPortletAction originalStrutsPortletAction,
			PortletConfig portletConfig, ActionRequest actionRequest,
			ActionResponse actionResponse)
		throws Exception {

		originalStrutsPortletAction.processAction(
			originalStrutsPortletAction, portletConfig, actionRequest,
			actionResponse);
	}

	public String render(
			StrutsPortletAction originalStrutsPortletAction,
			PortletConfig portletConfig, RenderRequest renderRequest,
			RenderResponse renderResponse)
		throws Exception {

		System.out.println("Wrapped /message_boards/view action");

		return originalStrutsPortletAction.render(
			null, portletConfig, renderRequest, renderResponse);
	}

	public void serveResource(
			StrutsPortletAction originalStrutsPortletAction,
			PortletConfig portletConfig, ResourceRequest resourceRequest,
			ResourceResponse resourceResponse)
		throws Exception {

		originalStrutsPortletAction.serveResource(
			originalStrutsPortletAction, portletConfig, resourceRequest,
			resourceResponse);
	}

}

Then we need to create the JSP in docroot/META-INF/custom_jsps/html/portal/sample.jsp

Hello <%= request.getAttribute("name") %>!

And lastly we need to create portal.properties in docroot/WEB-INF/src

auth.public.paths=/portal/sample

Now we are ready to deploy the plugin and see if it works. Just run ant deploy in your plugins sdk to deploy it.

You should see following in your tomcat console:

22:01:29,635 INFO  [AutoDeployDir:167] Processing sample-struts-action-hook-6.1.0.1.war
22:01:29,638 INFO  [HookAutoDeployListener:43] Copying web plugin for /Users/mika/Development/Liferay/git/bundles/deploy/sample-struts-action-hook-6.1.0.1.war
  Expanding: /Users/mika/Development/Liferay/git/bundles/deploy/sample-struts-action-hook-6.1.0.1.war into /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/temp/20110117220130299
  Copying 1 file to /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/temp/20110117220130299/WEB-INF/classes
  Copying 1 file to /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/temp/20110117220130299/WEB-INF/classes
  Copying 1 file to /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/temp/20110117220130299/WEB-INF
  Copying 1 file to /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/temp/20110117220130299/META-INF
  Copying 12 files to /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/webapps/sample-struts-action-hook
  Copying 1 file to /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/webapps/sample-struts-action-hook
  Deleting directory /Users/mika/Development/Liferay/git/bundles/tomcat-6.0.29/temp/20110117220130299
22:01:30,486 INFO  [HookAutoDeployListener:49] Hook for /Users/mika/Development/Liferay/git/bundles/deploy/sample-struts-action-hook-6.1.0.1.war copied successfully. Deployment will start in a few seconds.
Jan 17, 2011 10:01:39 PM org.apache.catalina.startup.HostConfig deployDirectory
INFO: Deploying web application directory sample-struts-action-hook
22:01:39,727 INFO  [PluginPackageUtil:1080] Reading plugin package for sample-struts-action-hook
22:01:39,759 INFO  [HookHotDeployListener:432] Registering hook for sample-struts-action-hook
22:01:39,770 INFO  [HookHotDeployListener:717] Hook for sample-struts-action-hook is available for use

Now try to access http://localhost:8080/c/portal/sample. It will ask you to sign in, and once you sign in you should see the message Hello World! in your browser. You can add a parameter called name to the URL to change the message. If you access the message boards, it will print the message "Wrapped /message_boards/view action" in the Tomcat console and continue to render the message boards as if nothing was changed.

Now, our sample was a really simple one. The return value from the execute method is the view to which the request is dispatched next. This can be a path to a JSP, an existing Struts forward or a Tiles definition. Returning null means that your action has handled the view already. You could, for instance, try returning portal.terms_of_use to display the terms of use, as in the sketch below.
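
Here is a minimal sketch of what such an action could look like; the class name TermsOfUseStrutsAction is just an illustration, only the forward name portal.terms_of_use comes from the portal:

package com.liferay.samplestrutsaction.hook.action;

import com.liferay.portal.kernel.struts.BaseStrutsAction;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class TermsOfUseStrutsAction extends BaseStrutsAction {

	public String execute(
			HttpServletRequest request, HttpServletResponse response)
		throws Exception {

		// Return an existing forward/tiles definition instead of a JSP path
		return "portal.terms_of_use";
	}

}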

You can download this sample plugin from svn://svn.liferay.com/repos/public/plugins/trunk/hooks/sample-struts-action-hook. The username is guest and password is empty.

UPDATE: We changed the API so that the original action is passed in, so you can also wrap it with your own logic instead of replacing it. I also added a new hook property auth.public.paths, which allows you to set new public paths from hooks. I also added a StrutsPortletAction to the sample to demonstrate wrapping an existing action.

This post was originally published on Liferay blog.

]]>
280
How do I cluster Liferay with Terracotta? https://jguru.fi/cluster-liferay-terracotta.html?utm_source=rss&utm_medium=rss&utm_campaign=cluster-liferay-terracotta Wed, 01 Sep 2010 03:32:59 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/how-do-i-cluster-liferay-with-terracotta-

That's a question I've heard many times, and in this post I will show you just how to do that. These instructions are for the Liferay 6 CE GA3 Tomcat 6.0 bundle; you can use any app server supported by Terracotta, but the locations and some of the configuration might be slightly different. To get started, download Liferay and Terracotta.

The next step is to install Liferay and Terracotta. For the purposes of this post I won't go into great detail on the installation, as both Terracotta and Liferay have good documentation. Basically the installation consists of unpacking the packages to a directory. From now on I will refer to those locations as LIFERAY_HOME and TERRACOTTA_HOME, and inside LIFERAY_HOME we will have a tomcat directory which I will refer to as TOMCAT_HOME. Normally you would install Liferay and Terracotta on separate servers, but I will write a separate post addressing the recommended architecture. For now we can install everything on the same machine and run Terracotta with its default configuration for development purposes.

Normally when clustering Liferay you need to address the following components: EhCache and Hibernate, the Quartz scheduler, the Document Library, the search engine, and optionally session replication. For the Document Library and the search engine Terracotta doesn't offer anything new, so you make those centrally available the same way as before, for example a SAN for the Document Library and Solr for search and indexing. That leaves EhCache and Hibernate, Quartz and session replication, which we can address with Terracotta.

EhCache and Hibernate Second Level Cache

  1. Remove ehcache.jar that is bundled with Liferay (located in TOMCAT_HOME/webapps/ROOT/WEB-INF/lib)
  2. Copy all jars in TERRACOTTA_HOME/ehcache/lib to TOMCAT_HOME/webapps/ROOT/WEB-INF/lib
  3. Copy TERRACOTTA_HOME/common/terracotta-toolkit-1.0-runtime-<version>.jar to TOMCAT_HOME/webapps/ROOT/WEB-INF/lib
  4. Create a my-ehcache folder in TOMCAT_HOME/webapps/ROOT/WEB-INF/classes
  5. Create a hibernate-terracotta.xml and a liferay-multi-vm-terracotta.xml in that folder (a minimal sketch of what these can look like follows this list)
  6. Adjust terracottaConfig in hibernate-terracotta.xml and liferay-multi-vm-terracotta.xml to point to your Terracotta servers. Like this: <terracottaConfig url="localhost:9510"/>
  7. Add the following properties to your portal-ext.properties file:
    ehcache.multi.vm.config.location=/my-ehcache/liferay-multi-vm-terracotta.xml
    net.sf.ehcache.configurationResourceName=/my-ehcache/hibernate-terracotta.xml
    hibernate.cache.region.factory_class=net.sf.ehcache.hibernate.EhCacheRegionFactory
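
As a rough idea of what those two files can contain, here is a minimal sketch based on the generic EhCache Terracotta configuration; it is not the exact content Liferay ships with, so adapt the cache settings and add the cache regions your files need:

<?xml version="1.0" encoding="UTF-8"?>
<ehcache>
	<!-- Point EhCache at the Terracotta server array -->
	<terracottaConfig url="localhost:9510"/>

	<!-- Caches containing <terracotta/> are clustered through Terracotta -->
	<defaultCache
		maxElementsInMemory="10000"
		eternal="false"
		timeToIdleSeconds="600"
		timeToLiveSeconds="300">
		<terracotta/>
	</defaultCache>
</ehcache>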

Quartz

  1. Remove quartz.jar that is bundled with Liferay (located in TOMCAT_HOME/webapps/ROOT/WEB-INF/lib)
  2. Copy TERRACOTTA_HOME/quartz/quartz-terracotta-<version>.jar and quartz-all-<version>.jar to TOMCAT_HOME/webapps/ROOT/WEB-INF/lib
  3. Add the following properties to your portal-ext.properties:
    org.quartz.jobStore.class = org.terracotta.quartz.TerracottaJobStore
    org.quartz.jobStore.tcConfigUrl = localhost:9510
  4. Extract portal.properties from portal-impl.jar and place it in TOMCAT_HOME/webapps/ROOT/WEB-INF/classes (see the example command after this list)
  5. Comment out the following properties in portal.properties:
    #org.quartz.jobStore.dataSource=ds
    #org.quartz.jobStore.isClustered=false
    #org.quartz.jobStore.misfireThreshold=60000
    #org.quartz.jobStore.tablePrefix=QUARTZ_
    #org.quartz.jobStore.useProperties=false
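
For step 4, assuming portal-impl.jar sits in TOMCAT_HOME/webapps/ROOT/WEB-INF/lib, one way to pull the file out is the standard jar tool:

cd TOMCAT_HOME/webapps/ROOT/WEB-INF/classes
jar xf ../lib/portal-impl.jar portal.properties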

Session Replication

This is highly container specific, so refer to the Terracotta documentation for specific instructions. The following steps are for Tomcat 6.0.

  1. Copy TERRACOTTA_HOME/sessions/terracotta-session-<version>.jar to TOMCAT_HOME/lib
  2. Copy TERRACOTTA_HOME/common/terracotta-toolkit-1.0-runtime-<version>.jar to TOMCAT_HOME/lib
  3. Edit TOMCAT_HOME/conf/Catalina/localhost/ROOT.xml and add the following line right after <Context>:
    <Valve className="org.terracotta.session.TerracottaTomcat60xSessionValve" tcConfigUrl="localhost:9510"/>

Testing The Configuration

Testing your configuration is simple:

  1. Startup your Terracotta Server
    TERRACOTTA_HOME/bin/start-tc-server.sh
  2. Startup your Tomcat
    TOMCAT_HOME/bin/startup.sh
  3. Before Tomcat has fully started you should see following lines in your Tomcat console log:
    2010-09-01 21:35:40,059 INFO - Terracotta 3.3.0, as of 20100716-140706 (Revision 15922 by cruise@rh5mo0 from 3.3)
    2010-09-01 21:35:40,566 INFO - Successfully loaded base configuration from server at 'localhost:9510'.
  4. Now browse http://localhost:8080 to verify that your portal is running.
  5. Now launch Terracotta Developer Console to verify that EhCache, Hibernate, Quartz and Sessions are seen by Terracotta. You can launch dev console with following command:
    TERRACOTTA_HOME/bin/dev-console.sh
  6. Once you are connected to your Terracotta server you should see Ehcache, Hibernate, Quartz and Sessions under My application, which indicates that all of them are connected and recognized by Terracotta. Now you can use the Dev Console to see what's inside your cache or session.

Closing Remarks

As you can see, it is quite easy to cluster Liferay with the Terracotta express installation. If you want to use the DSO approach, that is a whole other beast, as it involves tedious instrumentation. If you are a Liferay EE customer and want a supported version of both Liferay and Terracotta, contact your Liferay sales rep and ask about Liferay Terracotta Edition.

This post was originally published on Liferay blog.

]]>
282
Using Freemarker in your theme templates https://jguru.fi/using-freemarker-theme-templates.html?utm_source=rss&utm_medium=rss&utm_campaign=using-freemarker-theme-templates Tue, 24 Aug 2010 18:21:23 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/using-freemarker-in-your-theme-templates

Freemarker is a template language very similar to Velocity. Starting from Liferay 6.0, Liferay also supports Freemarker templates in themes and Web Content templates. In this post I will show how you can use Freemarker in your themes.

Getting started

To get started you'll need Liferay Portal 6.0 GA3 as well as the corresponding Plugins SDK. Once you have set up your Portal and Plugins SDK, we can start by creating a new theme plugin in the PLUGINS_SDK_ROOT/themes folder.

To create the theme issue following command:

./create.[sh|bat] my-freemarker "My Freemarker"

Then go to my-freemarker-theme directory and open build.xml in your favorite editor.

In build.xml add theme.type property with value ftl above theme.parent property like this:

<property name="theme.type" value="ftl"></property>
<property name="theme.parent" value="_styled"></property>

Then you need to create docroot/WEB-INF/liferay-look-and-feel.xml with following contents:

<?xml version="1.0"?>
<!DOCTYPE look-and-feel PUBLIC "-//Liferay//DTD Look and Feel 6.0.0//EN" "http://www.liferay.com/dtd/liferay-look-and-feel_6_0_0.dtd">

<look-and-feel>
	<compatibility>
		<version>6.0.0+</version>
	</compatibility>
	<theme id="my-freemarker-theme" name="My Freemarker">
		<template-extension>ftl</template-extension>
	</theme>
</look-and-feel>

Now you run:

ant deploy

Congratulations, you've just made your first Freemarker-based theme. Now you can override base theme files in the docroot/_diffs folder just as you would normally, except that template files now have the extension .ftl instead of .vm.

Freemarker syntax

Freemarker syntax is slightly different from Velocity, and it is much more strict. With Freemarker you won't get away with using undefined variables, and you should also note that a null value means the variable is undefined. To test whether a value exists you can use a double question mark after the variable name, like this:

<#if someVariableName??>
Variable exists
</#if>

For full syntax reference check out Freemarker website.

Pre-defined theme variables

Most of the variables present for Velocity templates are also available for Freemarker templates. Only the Velocity-specific tools were removed; you can accomplish everything and more with Freemarker built-ins. Here are some examples of how to format a java.util.Date variable with Freemarker built-ins:

${lastUpdated?string("yyyy-MM-dd HH:mm:ss zzzz")}
${lastUpdated?string("EEE, MMM d, ''yy")}
${lastUpdated?string("EEEE, MMMM dd, yyyy, hh:mm:ss a '('zzz')'")}

You can find all the variables available for Freemarker templates from com.liferay.portal.freemarker.FreeMarkerVariables class and docroot/html/themes/_unstyled/init.ftl

Macro libraries

Most of the macros available to Velocity templates are also available for Freemarker templates. The only difference is the syntax in which they are used. We provide a macro library with the namespace liferay so that it won't get mixed up with your own macros. You can take a look at portal-impl/src/FTL_liferay.ftl to see the full list of macros and use it as an example to build your own macros (a tiny example of a custom macro follows the list below). Here are some commonly used macros:

<@liferay.css file_name="some.css" />

<@liferay.js file_name="some.js" />

<@liferay.language key="my-key" />

<@liferay.breadcrumb />

<@liferay.docbar />
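
If you want to define your own macro alongside these, plain Freemarker syntax works in your templates; a trivial sketch (the greet macro is just an illustration):

<#macro greet name>
	Hello ${name}!
</#macro>

<@greet name="World" />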

Tag libraries

Yes, you read that correctly. You can use taglibs in your Freemarker templates. This is something unique to Freemarker, and it is limited to templates in themes. To import a portal taglib into your template, just add the following line to your template:

<#assign aui=PortalJspTagLibs["/WEB-INF/tld/liferay-aui.tld"]>

Now you can use any tag within that taglib just as if it were a macro library. Here's an example of how to add an Alloy UI input field:

<@aui.input name="aStringLiteral" label="Test" />

Have fun trying this out and if you find any glitches do report them to our issue tracker.

This post was originally published on Liferay blog.

]]>
284
Liferay Maven SDK https://jguru.fi/liferay-maven-sdk.html?utm_source=rss&utm_medium=rss&utm_campaign=liferay-maven-sdk Tue, 15 Dec 2009 11:16:53 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/liferay-maven-sdk_1

Starting to recover from jetlag after a two-week trip to Los Angeles and the Liferay retreat. One of the things we finally made progress on during the developer retreat is providing official Maven artifacts for Liferay, as well as porting our Plugins SDK to Maven. Things are not quite completed, but I will provide some instructions here for all early adopters.

Our goal is to provide our CE releases through our own public repository, as well as provide a means for our EE customers to install the EE artifacts into their local Maven repository.

If you have ever worked with enterprise projects using maven you already know how important a local maven repository and proxy is. For those not so familiar with Maven a proxy is a server that proxies your requests to public Maven repositories and caches the artifacts locally for faster and more reliable access. Most maven proxies can also host private repositories used for hosting your company’s private artifacts. Having a local proxy / repository makes your maven builds much faster and more reliable than accessing remote repositories that might even sometimes be unavailable.

1. Installing a maven proxy / repository

The first step is to install and set up Nexus. Nexus is an open source Maven repository manager that can proxy other repositories as well as host repositories. If you just want to try things locally you can skip this step.

  1. Download latest Nexus such as nexus-webapp-1.4.0-bundle.zip
  2. Follow the installation directions of the Nexus book http://nexus.sonatype.org/documentation.html
  3. Startup nexus
  4. Open your browser to your newly created nexus (if you installed it locally it could be accessed by opening http://localhost:8080/nexus)
  5. Login as administrator (default login is admin / admin123)
  6. Go to Repositories and click Add -> Hosted Repository
  7. Give the repository following information and click save
    • Repository ID: liferay-ce-releases
    • Repository Name: Liferay CE Release Repository
    • Provider: Maven2 Repository
    • Repository Policy: Release
  8. Create another hosted repository with following information
    • Repository ID: liferay-ce-snapshots
    • Repository Name: Liferay CE Snapshot Repository
    • Provider: Maven2 Repository
    • Repository Policy: Snapshot

Now you have a repository ready for Liferay’s Maven artifacts. Next step is to configure your maven to be able to upload artifacts to that repository.

 2. Configuring Maven Settings

Open your $HOME/.m2/settings.xml (if the file does not exist create it). Add the servers segment to your settings.xml

<?xml version="1.0" encoding="UTF-8"?>
<settings>
     <servers>
          <server>
               <id>liferay</id>
               <username>admin</username>
               <password>admin123</password>
          </server>
     </servers>
</settings>

You might also want to make Nexus your Maven proxy. To do that, just add the following XML segment to your settings.xml right before the servers element.

<mirrors>
     <mirror>
          <id>local</id>
          <name>Local mirror repository</name>
          <url>http://localhost:8080/nexus/content/groups/public</url>
          <mirrorOf>*</mirrorOf>
     </mirror>
</mirrors>

3. Installing Liferay Artifacts to Repository

Next we will install the Liferay Maven artifacts to your repository. First you need to check out the Liferay code from SVN.

svn --username guest co svn://svn.liferay.com/repos/public/portal/trunk portal-trunk

Guest user does not require password.

Then create a release.${username}.properties file and add

maven.url=http://localhost:8080/nexus/content/repositories/liferay-ce-snapshots

Build Liferay artifacts by running

ant clean start jar

Now you can deploy the Liferay artifacts to your maven repository by running

ant -f build-maven.xml deploy-artifacts

If you only want to have them locally without a maven repository you can run the install task instead of deploy

ant -f build-maven.xml install-artifacts

Now you can add Liferay dependencies to your maven project. Following artifacts are available:

<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>portal-client</artifactId>
	<version>5.3.0-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>portal-impl</artifactId>
	<version>5.3.0-SNAPSHOT</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>portal-kernel</artifactId>
	<version>5.3.0-SNAPSHOT</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>portal-service</artifactId>
	<version>5.3.0-SNAPSHOT</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>portal-web</artifactId>
	<version>5.3.0-SNAPSHOT</version>
	<scope>provided</scope>
</dependency>
<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>util-bridges</artifactId>
	<version>5.3.0-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>util-java</artifactId>
	<version>5.3.0-SNAPSHOT</version>
</dependency>
<dependency>
	<groupId>com.liferay.portal</groupId>
	<artifactId>util-taglib</artifactId>
	<version>5.3.0-SNAPSHOT</version>
</dependency>

NOTE: portal-impl and portal-web are provided for the Maven plugins and should never be added as a dependency to your Liferay plugins.

 4. Installing the Liferay Maven SDK

To take full advantage of Maven we are porting the functionality of our Ant-based Plugins SDK to Maven. To use it you need to install it locally. To install the Liferay Maven plugins and archetypes, go into the support-maven folder and run

mvn install

Now the Liferay Maven SDK is installed and ready to use. We've implemented a portlet archetype and a deployer plugin.

5. Creating a Portlet Plugin

Move to the folder where you want to create your portlet and run

mvn archetype:generate

From the list select liferay-portlet-archetype and provide your project groupId, artifactId and version for the portlet project.

Your portlet project's pom.xml has two properties, liferay.auto.deploy.dir and liferay.version. These properties are usually moved to your parent pom.xml or settings.xml so that you don't have to adjust them for every single plugin you create. Set liferay.auto.deploy.dir to point to the auto-deploy directory of your Liferay bundle; this is where the deploy plugin will copy your portlet. Now you are ready to deploy your newly created portlet. You can deploy it by running

mvn liferay:deploy
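
As a rough sketch, the two properties can be set in the plugin's pom.xml (or a shared parent pom) like this; the deploy directory below is just a placeholder for your own bundle location:

<properties>
	<!-- Liferay version the plugin is built against -->
	<liferay.version>5.3.0-SNAPSHOT</liferay.version>
	<!-- Auto-deploy directory of your Liferay bundle (placeholder path) -->
	<liferay.auto.deploy.dir>/opt/liferay/bundle/deploy</liferay.auto.deploy.dir>
</properties>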

6. Future Plans

We are also in the process of adding archetypes for themes, hooks and layouts as well as providing portlet archetypes for different types of portlets like JSP, Spring MVC, JSF etc. I will blog about it once they are done.

A special thanks goes to Thiago Moreira and Brian Chan for making this possible, and to the community and customers for putting pressure on us to get this done.

If you are using 5.2.3 CE and want to take advantage of Maven for building Liferay portlets, Milen Dyankov, a Liferay community member, has also done great work on a Maven SDK for 5.2.3 CE. You can find out more about it on GitHub.

This post was originally published on Liferay blog.

]]>
286
SiteMinder integration is here https://jguru.fi/siteminder-integration-is-here.html?utm_source=rss&utm_medium=rss&utm_campaign=siteminder-integration-is-here Fri, 03 Oct 2008 21:37:15 +0000 https://web.liferay.com/web/mika.koivisto/blog/-/blogs/siteminder-integration-is-here

You’ve been heard! Out of box SiteMinder integration is here.

Computer Associate’s (CA) SiteMinder is a centralized web access management system that enables user authentication and single sign-on, policy-based authorization, identity federation, and auditing of access to Web applications and portals.

Liferay has out-of-the-box SiteMinder integration as of the recent Liferay 5.1.2 release. The integration is based on the CAS integration and only supports authenticating with screenName. It also knows how to properly terminate the SiteMinder session. SiteMinder is usually connected to an LDAP, so this integration is also able to import users from LDAP.

You can enable it either through portal-ext.properties or the UI, just like with CAS or OpenSSO.

Enabling from portal-ext.properties:

##
## SiteMinder
##

    #
    # Set this to true to enable CA SiteMinder single sign on. If set to true,
    # then the property "auto.login.hooks" must contain a reference to the class
    # com.liferay.portal.security.auth.SiteMinderAutoLogin and the
    # "logout.events.post" must have a reference to
    # com.liferay.portal.events.SiteMinderLogoutAction for logout to work.
    #
    siteminder.auth.enabled=true

    #
    # A user may be authenticated from SiteMinder and not yet exist in the
    # portal. Set this to true to automatically import users from LDAP if they
    # do not exist in the portal.
    #
    siteminder.import.from.ldap=true

    #
    # Set this to the name of the user header that SiteMinder passes to the
    # portal.
    #
    siteminder.user.header=SM_USER
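
As the property comments above note, the SiteMinderAutoLogin and SiteMinderLogoutAction classes must be referenced from auto.login.hooks and logout.events.post. A hedged sketch of what that could look like in portal-ext.properties; append these classes to whatever values your installation already uses rather than replacing them:

    # Keep your existing auto login hooks and add SiteMinderAutoLogin to the list
    auto.login.hooks=com.liferay.portal.security.auth.SiteMinderAutoLogin

    # Keep your existing post-logout events and add SiteMinderLogoutAction to the list
    logout.events.post=com.liferay.portal.events.SiteMinderLogoutAction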

Enabling from Liferay UI:

This post was originally published on Liferay blog.

]]>
288
Configuring ActiveMQ 5 jms topic in Tomcat 6 https://jguru.fi/configuring-activemq-5-jms-topic-in-tomcat-6.html?utm_source=rss&utm_medium=rss&utm_campaign=configuring-activemq-5-jms-topic-in-tomcat-6 https://jguru.fi/configuring-activemq-5-jms-topic-in-tomcat-6.html#respond Tue, 11 Dec 2007 14:15:27 +0000 http://javaguru.fi/?p=26 For some reason it is quite difficult to find clear instructions on how to configure an ActiveMQ JMS topic in Tomcat as a JNDI resource and then consume messages from it with a message-driven POJO. I chose to use ActiveMQ 5 since it requires fewer dependent libraries than previous versions.

Start by downloading ActiveMQ 5.0.0 from Apache ActiveMQ site

You need the following jars under CATALINA_HOME/lib:
– activemq-core-5.0.0.jar
– commons-logging-1.1.jar
– geronimo-j2ee-management_1.0_spec-1.0.jar (or another jar that has javax.management apis)
– geronimo-jms_1.1_spec-1.0.jar (or another jar that has javax.jms apis)
– geronimo-jta_1.0.1B_spec-1.0.jar (or another jar that has javax.transaction apis)

You can find the above libraries in ACTIVEMQ_HOME/lib.

Then configure the topic and connection factory in CATALINA_HOME/conf/server.xml. Since the webapps link to them with ResourceLink elements, these definitions go inside the GlobalNamingResources element:

<Resource 
	name="jms/ConnectionFactory" 
	auth="Container" 
	type="org.apache.activemq.ActiveMQConnectionFactory" 
	description="JMS Connection Factory"
	factory="org.apache.activemq.jndi.JNDIReferenceFactory" 
	brokerURL="vm://localhost" brokerName="LocalActiveMQBroker"/>

<Resource 
	name="jms/SampleTopic" 
	auth="Container" 
	type="org.apache.activemq.command.ActiveMQTopic" 
	description="my Topic"
	factory="org.apache.activemq.jndi.JNDIReferenceFactory" 
	physicalName="SAMPLE.TOPIC"/>

Then you need to add ResourceLink elements to either CATALINA_HOME/conf/context.xml or your webapp's META-INF/context.xml:

<Context>
...
	<ResourceLink global="jms/ConnectionFactory" name="jms/ConnectionFactory" type="javax.jms.ConnectionFactory"/>
	<ResourceLink global="jms/SampleTopic" name="jms/SampleTopic" type="javax.jms.Topic"/>
</Context>

You also need to add resource-ref entries to your webapp's web.xml:

<resource-ref>
	<res-ref-name>jms/ConnectionFactory</res-ref-name>
	<res-type>javax.jms.ConnectionFactory</res-type>
	<res-auth>Container</res-auth>
	<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>	
<resource-ref>
	<res-ref-name>jms/SampleTopic</res-ref-name>
	<res-type>javax.jms.Topic</res-type>
	<res-auth>Container</res-auth>
	<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>

Then configure the message-driven POJO with Spring. Notice that this is really a plain POJO that does not know anything about JMS.

package fi.javaguru.mdp;

public class SamplePojo {

    public void doSomething(final String msg) {

        System.out.println("Got message: " + msg);
    }
}

Spring configuration for consuming messages

<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xmlns:jee="http://www.springframework.org/schema/jee"
	xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd
http://www.springframework.org/schema/jee http://www.springframework.org/schema/jee/spring-jee-2.0.xsd">

    <jee:jndi-lookup id="jmsConnectionFactory" jndi-name="jms/ConnectionFactory" resource-ref="true"/>
    <jee:jndi-lookup id="jmsTopic" jndi-name="jms/SampleTopic"
		resource-ref="true" proxy-interface="javax.jms.Topic"/>

    <bean id="pojo" class="fi.javaguru.mdp.SamplePojo" />
	
    <bean id="listener" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
        <property name="delegate" ref="pojo"/>
        <property name="defaultListenerMethod" value="doSomething"/>
    </bean>

    <bean id="container" class="org.springframework.jms.listener.SimpleMessageListenerContainer">
        <property name="connectionFactory" ref="jmsConnectionFactory"/>
        <property name="messageListener" ref="listener"/>
        <property name="destination" ref="jmsTopic"/>
    </bean>

</beans>

This sample assumes you are sending String messages to the topic. You could also send other objects as long as the consumer knows about those classes. That's it for now; I will write another post later that continues this sample with producing messages to a topic.
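
In the meantime, here is a minimal sketch of the producing side using Spring's JmsTemplate against the same JNDI resources; the bean wiring mirrors the consumer configuration above and is only meant as an illustration:

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
	<property name="connectionFactory" ref="jmsConnectionFactory"/>
	<property name="defaultDestination" ref="jmsTopic"/>
</bean>

Publishing a message from code is then a one-liner:

// jmsTemplate injected from the Spring context
jmsTemplate.convertAndSend("Hello from the producer");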

]]>
https://jguru.fi/configuring-activemq-5-jms-topic-in-tomcat-6.html/feed 0 26