Blog

Magic templates in Plone 5

created by Philip Bauer, last modified: 2017-03-01T12:22:05+01:00
Thanks to the new rendering engine Chameleon, writing templates is fun again.

Plone 5 uses Chameleon as its rendering engine. Did you know that because of that you can put a pdb in a template? If you saw Eric Steele's keynote on Plone 5, you probably do.

But did you also know that the variable econtext holds all variables defined up to the moment the pdb is triggered?

Let's put a pdb in newsitem.pt:

(...)

<metal:content-core fill-slot="content-core">
    <metal:block define-macro="content-core"
          tal:define="templateId template/getId;
                      scale_func context/@@images;
                      scaled_image python: getattr(context.aq_explicit, 'image', False) and scale_func.scale('image', scale='mini')">

<?python import pdb; pdb.set_trace() ?>

    <figure class="newsImageContainer"
         tal:condition="python: scaled_image">
        <a href="#"
           tal:define="here_url context/@@plone_context_state/object_url;
                       large_image python: scale_func.scale('image', scale='large');"
           tal:attributes="href large_image/url">
          <img tal:replace="structure python: scaled_image.tag(css_class='newsImage')" />

(...)

When rendering a News Item the variable scaled_image is accessible as econtext['scaled_image']:

> /Users/philip/workspace/test/f56e9585b89e34318d1171acc17f531a7a428a1f.py(132)render_content_core()
(Pdb) econtext['scaled_image']
<plone.namedfile.scaling.ImageScale object at 0x110073b90>
(Pdb) econtext['scaled_image'].width
200 

You can inspect the whole econtext:

(Pdb) from pprint import pprint as pp
(Pdb) pp econtext
{'__convert': <function translate at 0x10fb4d7d0>,
 '__decode': <function decode at 0x10fb4d578>,
 '__slot_content_core': deque([]),
 '__slot_javascript_head_slot': deque([]),
 '__translate': <function translate at 0x10fb4d7d0>,
 'ajax_include_head': None,
 'ajax_load': False,
 'args': (),
 'body_class': 'template-newsitem_view portaltype-news-item site-Plone section-super-news userrole-manager userrole-authenticated userrole-owner plone-toolbar-left-default',
 'checkPermission': <bound method MembershipTool.checkPermission of <MembershipTool at /Plone/portal_membership used for /Plone/super-news>>,
 'container': <NewsItem at /Plone/super-news>,
 'context': <NewsItem at /Plone/super-news>,
 'context_state': <Products.Five.metaclass.ContextState object at 0x10db00910>,
 'default': <object object at 0x100291bf0>,
 'dummy': None,
 'here': <NewsItem at /Plone/super-news>,
 'isRTL': False,
 'lang': 'de',
 'loop': {},
 'modules': <Products.PageTemplates.ZRPythonExpr._SecureModuleImporter instance at 0x102e31b48>,
 'nothing': None,
 'options': {},
 'plone_layout': <Products.Five.metaclass.LayoutPolicy object at 0x10db00310>,
 'plone_view': <Products.Five.metaclass.Plone object at 0x10db00dd0>,
 'portal_state': <Products.Five.metaclass.PortalState object at 0x10fbe8d50>,
 'portal_url': 'http://localhost:8080/Plone',
 'repeat': {},
 'request': <HTTPRequest, URL=http://localhost:8080/Plone/super-news/newsitem_view>,
 'root': <Application at >,
 'scale_func': <Products.Five.metaclass.ImageScaling object at 0x10c2bf390>,
 'scaled_image': <plone.namedfile.scaling.ImageScale object at 0x110073b90>,
 'site_properties': <SimpleItemWithProperties at /Plone/portal_properties/site_properties>,
 'sl': False,
 'sr': False,
 'target_language': None,
 'template': <Products.Five.browser.pagetemplatefile.ViewPageTemplateFile object at 0x10fff3ed0>,
 'templateId': 'newsitem.pt',
 'toolbar_class': 'pat-toolbar initialized plone-toolbar-left',
 'translate': <function translate at 0x10fb4d7d0>,
 'traverse_subpath': [],
 'user': <PropertiedUser 'adminstarzel'>,
 'view': <Products.Five.metaclass.SimpleViewClass from /Users/philip/workspace/test/src-mrd/plone.app.contenttypes/plone/app/contenttypes/browser/templates/newsitem.pt object at 0x10cd93910>,
 'views': <Products.Five.browser.pagetemplatefile.ViewMapper object at 0x10f5cc190>,
 'wrapped_repeat': <Products.PageTemplates.Expressions.SafeMapping object at 0x10ffba1b0>}

Using n (next) you can actually walk down the template and inspect new variables as they appear. After pressing n about 11 times, the variable large_image appears as econtext['large_image']:

(Pdb) econtext['large_image'].width
768

The pdb session you are in is not restricted Python but real Python. This means you can do the following:

(Pdb) from plone import api
(Pdb) memberdata = api.portal.get_tool('portal_memberdata')
(Pdb) memberdata.getProperty('wysiwyg_editor')
'kupu'

Hey, what is kupu doing there? I found that in some of our sites that were migrated from Plone 3, this old setting prevented TinyMCE from working in static portlets. But that is a different story; let's just get rid of it.

(Pdb) memberdata.wysiwyg_editor = 'TinyMCE'
(Pdb) import transaction; transaction.commit()

This is a full-grown pdb and you can inspect and modify your complete site with it.

But there is more: you can actually put complete code blocks into templates:

<?python

from plone import api
catalog = api.portal.get_tool('portal_catalog')
results = []
for brain in catalog(portal_type='Folder'):
    results.append(brain.getURL())

?>

<ul>
    <li tal:repeat="result results">
      ${result}
    </li>
</ul>

Quick and dirty? Maybe dirty, but really quick! It is still very true that having logic in templates is bad practice, but I think in some use cases it is OK:

  • Debugging
  • When you customize an existing template (with z3c.jbot or plone.app.themingplugins) and need some more logic
  • When you quickly need to add some logic to a browser view that only has a template but no class of its own

Have fun templating with Plone 5! If you want to learn more about Plone 5 you can still register for the training "Mastering Plone 5 Development", March 2-6 (http://www.starzel.de/leistungen/training/).

Update:

As discussed here, you can add the econtext to locals() by using the following stanza:

<?python locals().update(econtext); import pdb; pdb.set_trace() ?>

 

Ansible DebOps and how to move Gitlab to it

created by Steffen Lindner, last modified: 2015-03-24T14:09:17+01:00
This howto shows how to move a non-DebOps Gitlab installation to a DebOps-managed installation (and let DebOps do upgrades).

For some time we have managed our infrastructure with Ansible; mainly we plugged together several roles from Ansible Galaxy (the PyPI of Ansible roles). It helped us get our servers into a better state without much effort. Portknox.net, our ownCloud hosting, benefits a lot from this move.

In search of a Jenkins role, we discovered DebOps. DebOps is "a collection of Ansible playbooks, scalable from one container to an entire data center." It does not have a Jenkins role yet (we are working on it), but it really shines in Debian/Ansible best practices and well-developed, well-integrated playbooks. Maciej Delmanowski, the initial creator of DebOps, is outstandingly helpful and constantly pushes new features and roles.

Recently we transferred two Gitlab installations (and updated them). Here is how we did it:

You need a working DebOps Gitlab install on a new server. See Getting started and debops.gitlab.
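For orientation, a minimal Ansible inventory for such a setup might look like the sketch below. The hostname and group names are assumptions; check the debops.gitlab documentation for the exact group names your DebOps version expects.

```ini
; ansible/inventory/hosts -- sketch, names are examples
[debops_all_hosts]
git.example.org

[debops_gitlab]
git.example.org
```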

To be able to import a backup, both Gitlab installations need to be the same version (and use the same database):

On the old server:

$ su git
$ cd gitlab
$ git rev-parse --verify HEAD
1660aa23e3f6bea8e0de54a420e29953f6bd194f  # Save the hash
$ cd gitlab-shell
$ git rev-parse --verify HEAD
a3b54457b1cd188981d4d0775fc7acf2fd6aa128  # Save the hash
# Do the backup
$ bundle exec rake gitlab:backup:create RAILS_ENV=production
# Omnibus install
$ gitlab-rake gitlab:backup:create

Transfer the backup from gitlab/tmp/backups/1423229329_gitlab_backup.tar to the new server /var/backups/gitlab.

On the new server:

$ su git
$ cd gitlab
$ GIT_WORK_TREE=/var/local/git/gitlab git checkout -f 1660aa23e3f6bea8e0de54a420e29953f6bd194f
$ cd gitlab-shell
$ GIT_WORK_TREE=/var/local/git/gitlab-shell git checkout -f a3b54457b1cd188981d4d0775fc7acf2fd6aa128
$ cd gitlab
$ mysql  # Log in to the mysql shell
# Drop the fresh gitlab db from the newly installed gitlab
mysql> drop database gitlabhq_production;
$ bundle install
$ bundle exec rake gitlab:backup:restore RAILS_ENV=production BACKUP=1423229329 #With backup timestamp
# Omnibus install
$ gitlab-rake gitlab:backup:restore
$ /etc/init.d/gitlab restart
# Check if gitlab is working and your data is there
$ /etc/init.d/gitlab stop
$ cd src/gitlab.com/gitlab-org/gitlab-shell
$ GIT_WORK_TREE=/var/local/git/gitlab-shell git checkout master
$ cd src/gitlab.com/gitlab-org/gitlab
$ GIT_WORK_TREE=/var/local/git/gitlab git checkout master
# Run debops against the host to let it upgrade
$ debops -l <hostname> -t gitlab

Lean back and let DebOps help you with upcoming upgrades of Gitlab.


Plone Conference 2014: The Highlights

created by Philip Bauer, last modified: 2017-03-01T12:22:06+01:00
The Plone Conference has once again proven its value. There were many excellent talks and everyone had a great time. In Open Spaces and during many discussions between talks the current state and the future of Plone became much clearer.

Plone 5

The main thing is Plone 5. It looks great, is a huge step forward in terms of user experience, and brings tons of important improvements. The keynote by Plone's release manager Eric Steele showcased all these things.

Getting Plone 5 ready for a beta release is the top item on everyone's agenda. That effort involves not only finishing projects such as mockup, the new theme and the content types, but also writing documentation for end users, the upgrade guide for developers, and testing.

The Plone Roadmap

The second biggest thing was the roadmap discussion. Discussions about a new roadmap started in October 2013 at the Plone Conference in Brazil under the catchy title "Plone Roadmap 2020". The initial topic was the future of Zope within Plone's ecosystem, but it turned into a broader discussion about all the things that the community wants to change in Plone, not forgetting the whys and hows.

Roadmap progress in Bristol 2014

First we looked at reasons why we use Plone. The following motivations were mentioned the most:

  • The Plone Community
  • Fully-featured user friendly product that is directly usable
  • Flexibility of the software architecture
  • It creates jobs
  • Best security-record
  • Open source

As Martin Aspeli put it so eloquently: if we are going to change Plone, can we please not mess any of the above up?

Next we compiled a list of things that we want to change in Plone in the next five years.

  • python-api
  • json-api
  • Improve end user experience
  • Documentation and training: Document the recommended way to do things
  • Improve TTW-story in theming, templating and customization
  • Simplify code base, reduce number of eggs, remove legacy technologies
  • Remove dependency on CMF
  • Roadmap communication

Not a single person advocated a rewrite, since it is clear that a rewrite would mess with some of the things on the first list.

When looking closer at the aims it becomes clear that all of these things are already being worked on:

  • python-api: There is now a Plone Improvement Proposal to ship plone.api with Plone and also use it in the core. There will be further discussions about what we need from an api to further isolate Plone from parts that we might want to replace in the future.
  • json-api: The community is already working on a RESTful json-api for Plone that will allow isolating the front-end from the backend and experimenting with various javascript-frameworks. With this api Plone will be better suited for web applications where Plone is "only" used as the backend and will be a friendly citizen in a world of mixed technologies.
  • UX: Plone 5 will already be a huge step in the right direction but there will always be room for improvement. The enthusiastic reaction to the efforts of the Plone Intranet Consortium showed how important a good UX is.
  • Documentation: The new docs.plone.org is much better than anything Plone had in the past. During the sprint the documentation was already being cleaned of examples that use grok. Also, the documentation for the Mastering Plone Training always teaches the recommended ways to do things and will be expanded and updated for Plone 5.
  • TTW-Story: Although Mosaic (the new name for Deco) looks extremely promising this is surely one of the areas where Plone still needs to invest more. For template-customization we still rely on old technologies: The only way to customize a viewlet within Plone is to use the ZMI (not recommended!). The community will have to agree on achievable solutions to have some real progress. But do not forget that Plone already has a great TTW-story: Dexterity and the Diazo theme-editor are powerful features and blow the competition out of the water.
  • Simplify code base, reduce number of eggs: Many technologies (formlib, portal_skins, plone-tools, cpy/cpt) are already deprecated and the code is migrated to browserviews and z3c.form (*cough*) in Plone 5. There is also a PLIP that aims to move many plone.app-packages into the core-package Products.CMFPlone [https://dev.plone.org/ticket/13283].
  • Remove dependency on CMF: Without a python-api we cannot remove parts of CMFCore and Zope. Having that api will give us the options we need to replace/remove stuff. We'll need some experiments and even more discussions about what we want to remove and what to replace it with.
  • Roadmap communication: The discussion on the Plone roadmap will continue. The next Plone Open Garden (April 2015 in Sorrento) will probably be turned into a Plone Strategic Planning Summit. Communicating the community's strategic vision of Plone in the short, mid and long term to the public is almost as important as agreeing on a roadmap and implementing it.

Since there is no official roadmap team, these kinds of meetings are where the future of Plone is actually agreed on. This is where Plone happens, and everyone who takes part is part of the roadmap team.

The Plone Intranet Consortium

One of the most anticipated talks of the conference was about the Plone Intranet Consortium. Since its foundation last year, the consortium (of which Starzel.de is a founding member) has been working on a competitive social intranet solution. One of the aims is to evolve and strengthen the position of Plone in a commoditized market. The design-first approach and the fact that Plone companies are pooling their resources allows for high expectations. See the talk:

 

The sprint

As usual, after the conference there were two days to actually work on Plone. The hotel had to quickly open another big room to accommodate all sprinters, since many more people showed up this time than expected. On Saturday morning you could see the titanpad exploding as people started adding the topics they wanted to work on. I worked with a great team of developers on the default content types of Plone 5, mainly focusing on making it easy to migrate the content of existing websites to Dexterity. Other results can be seen at the titanpad and here.

In 2015 the Plone Conference will be in Bucharest. I wouldn't miss it for the world.

 

How to embed self-signed certs and how to avoid not verifying https URLs with git

created by pgerken, last modified: 2014-04-11T14:07:33+01:00
Working with self-signed certificates can be hard. Here is how you can avoid forgoing certificate validation in git with self-signed certificates.

One of our clients has high demands on the protection of its data and code.

They run their own self-hosted git server, and it is only accessible via https.

The original suggestion to make it work was to deactivate SSL certificate validation on our computers. Luckily, git allows for a simpler solution, even though it's not straightforward to get there.

First, you need the public key of the certification authority that signed the SSL key the server uses. A self-signed certificate is its own certification authority. The proper way would be to ask the client to provide the certificate via a secure channel. The improper way is to just download the certificate. Improper, because how do you know it is the right certificate? It is still better than not doing certificate validation at all.

Getting the certificate the insecure way

You can ask a webserver to return its certificate and CA certificates with openssl:

openssl s_client -showcerts -connect yourserver:443 </dev/null

This returns a lot of data; you don't want all of it, only the last block that starts with "BEGIN CERTIFICATE" and ends with "END CERTIFICATE". Here is the same command with a pipe to a little awk script that does just that:

openssl s_client -showcerts -connect yourserver:443 </dev/null | awk '/-----BEGIN CERTIFICATE-----/ {start=1; cert=""};/-----END CERTIFICATE-----/ {start=0; cert=cert $0; };{if (start) cert=cert $0 "\n"}; END {print cert}' > yourserver.pem
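Once the file is written, it is worth a quick sanity check that you actually extracted a valid certificate (yourserver.pem is the filename from the command above):

```shell
# print the subject and expiry date of the extracted certificate;
# openssl exits non-zero if the file is not a valid PEM certificate
openssl x509 -in yourserver.pem -noout -subject -enddate
```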

Now that we have the certificate, we must teach git to use it.

We can teach git to use the certificate, but if we used it globally, git would no longer use the standard CA certificates. So we only use the certificate for specific repositories.

While we can easily modify the configuration for a specific git checkout, we must first check it out.

How to use the new certificate with git

To bootstrap git, you enter:

GIT_SSL_CAINFO=yourserver.pem git clone https://YOURSERVER.../

Now we can add the specific entry to the configuration in PROJECT/.git/config, at the end of the file:

[http]
    sslCAInfo = /home/mememe/project/yourserver.pem

Please be aware that you must enter the full path.
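If you prefer not to edit the file by hand, the same entry can be written with git config, run from inside the checkout (the path is the example from above):

```shell
# writes the [http] sslCAInfo entry into PROJECT/.git/config
git config http.sslCAInfo /home/mememe/project/yourserver.pem
```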

Now you are done and can securely use git pull and push.

How to use it with mr.developer

Bootstrapping a buildout with mr.developer can be done with the same prefix as the git clone. Unfortunately it will NOT work if you have mixed git repositories and need more certificates, for example for github https urls. You can work around that by downloading the github certificate too and appending it to the existing pem file.
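Appending a second certificate is a simple concatenation, since a PEM file may contain several certificates back to back (github-chain.pem is a hypothetical filename for the downloaded github certificate):

```shell
# append the extra CA certificate to the existing bundle
cat github-chain.pem >> yourserver.pem
# count the certificates now in the combined bundle
grep -c -- '-----BEGIN CERTIFICATE-----' yourserver.pem
```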

A reminder about catalog-indexes

created by Philip Bauer, last modified: 2014-05-21T05:56:13+01:00
We often create new indexes to make content searchable in Plone. Many developers still do it wrong.

Most new sites we create contain faceted search based on eea.facetednavigation. Thus we often create new indexes to make content searchable. One example is www.idea-frankfurt.eu, where attributes and relations of projects are used to make them searchable in a useful way.

Almost everything that is to be said about indexes can be found in http://docs.plone.org/develop/plone/searching_and_indexing/indexing.html

But there was a known pitfall when registering indexes in catalog.xml that was only fixed in Plone 4.3.2. Even though Maurits van Rees warned about this in a blog post, which is also referenced in the documentation, I often see developers making that mistake.

Unless you are working with Plone 4.3.2+, you should never register an index in catalog.xml, because the index will be purged when your package is reinstalled. Instead, register new indexes in your setuphandlers.py. This was fixed in GenericSetup 1.7.4.

I firmly believe that addons have to be reinstallable without ruining a site.

I extended the method written by Maurits to separate the name of the index from the indexed method (useful when dealing with old code):

# -*- coding: UTF-8 -*-
import logging
from Products.CMFCore.utils import getToolByName
PROFILE_ID = 'profile-my.package:default'


def setupVarious(context):

    # Ordinarily, GenericSetup handlers check for the existence of XML files.
    # Here, we are not parsing an XML file, but we use this text file as a
    # flag to check that we actually meant for this import step to be run.
    # The file is found in profiles/default.

    if context.readDataFile('my.package_various.txt') is None:
        return

    # Add additional setup code here
    logger = context.getLogger('my.package')
    site = context.getSite()
    add_catalog_indexes(site, logger)


def add_catalog_indexes(context, logger=None):
    """Method to add our wanted indexes to the portal_catalog.

    @parameters:

    When called from the import_various method below, 'context' is
    the plone site and 'logger' is the portal_setup logger.  But
    this method can also be used as upgrade step, in which case
    'context' will be portal_setup and 'logger' will be None.
    """
    if logger is None:
        # Called as upgrade step: define our own logger.
        logger = logging.getLogger('my.package')

    # Run the catalog.xml step as that may have defined new metadata
    # columns.  We could instead add <depends name="catalog"/> to
    # the registration of our import step in zcml, but doing it in
    # code makes this method usable as upgrade step as well.  Note that
    # this silently does nothing when there is no catalog.xml, so it
    # is quite safe.
    setup = getToolByName(context, 'portal_setup')
    setup.runImportStepFromProfile(PROFILE_ID, 'catalog')

    catalog = getToolByName(context, 'portal_catalog')
    indexes = catalog.indexes()
    # Specify the indexes you want, with
    # ('index_name', 'index_type', 'indexed_attribute')
    wanted = (('myindex', 'FieldIndex', 'getMyAttribute'),
              )
    indexables = []
    for name, meta_type, attribute in wanted:
        if name not in indexes:
            if attribute:
                extra = {'indexed_attrs': attribute}
                catalog.addIndex(name, meta_type, extra=extra)
            else:
                catalog.addIndex(name, meta_type)
            indexables.append(name)
            if not attribute:
                attribute = name
            logger.info("Added %s '%s' for attribute '%s'.",
                        meta_type, name, attribute)
    if len(indexables) > 0:
        logger.info("Indexing new indexes %s.", ', '.join(indexables))
        catalog.manage_reindexIndex(ids=indexables)

By the way: besides many other amazing features, the package ftw.upgrade also has the methods catalog_rebuild_index, catalog_add_index and catalog_remove_index that you can use in your upgrade steps.