Profile of the provider seble21

General information about the provider seble21

Nickname: seble21
Type of structure: qualified freelancer
Registration date: 04/03/2010
Last online: 10/03/2010
Ranking: ranked 39,605th out of 95,261 ranked providers

Skill tags

EXPERT JAVA EXPERT SQL RUBY ON RAILS WEB SITE DESIGN EXPERT TESTING EXPERT IN AGILE DEVELOPMENT CYCLE

Detailed profile of the provider seble21

Areas of expertise
Languages
Java (J2SE), Ruby, Perl, Python, C/C++, Delphi, SQL, HTML, JavaScript, PostScript, CSS
Frameworks & APIs
J2EE, Spring 2, Hibernate, JDO, JDBC, XSLT, Castor, Mule, JMS, JMock, JUnit, JMX, Struts, JSP, SWING, JRuby, Jython, SOAP, CORBA, JNI
Concepts
OOA, OOD, UML, Xtreme programming, Agile programming, Multi-threading, Quality management, Test Driven development, Distributed systems, AOP, SOA
Programs/Tools
Subversion, Version Manager, Tracker, CVS, SourceSafe, Tomcat, Apache, JBoss, QARun/QADirector, Eclipse, JBuilder, IntelliJ, Maven, Ant, Oracle Application Server 10g, OptimizeIt, Together, Drools, ActiveMQ, Clover, Emma, Matlab Compiler, Compuset, FishEye, Jira, Crucible, Hyperic, CruiseControl
Databases
Sybase ASE, Sybase IQ, Oracle, SQL Server, MySQL, DB2
Operating Systems
Windows, Solaris, Linux
Professional experience

Dec 2005 – April 2009 BNP Paribas - Market Risk IT

Technical Leader

Unitary Release Manager

My time was split between managing projects and taking an active part in the development effort.

As a Unitary Release Manager, my responsibilities included:

- Manage the weekly release cycle.

- Estimate and prioritize the work to be done with users, team leaders and support team, and assign that work to members of the unitary release team (3 persons).

- Report work done (effort, number of features implemented, number of bug fixes), and give forecasts on delivery times.

- Ensure that system integration tests and user acceptance tests have passed.

- Liaise with the release manager to check the artefacts that will be deployed, and conduct an impact analysis.

- Manage patches, urgent production-related fixes, and urgent configuration changes requested by users.

- Drive a CMMI level 2 initiative by putting in place the relevant processes, documentation and metrics.

As a Technical Leader my responsibilities were the following:

- Coach junior members of the team.

- Participate in technical specifications and their validation.

- Find technical solutions, ensure that a homogenised approach is used for common problems, identify new technologies that would benefit the team, and organize proofs of concept.

- Manage and take an active part in projects that are part of a major release. A project is on average 100 man-days and lasts 2-3 months.

- Take a very active role in coding the critical parts of the Market Risk System.

- Conduct code reviews.

Examples of projects that I managed and coded:

Script Processing Engine

This is a scalable Java/Spring framework that lets our super users run Jython scripts in a controlled manner.

Our super users need to do a lot of tactical development, mainly to accommodate the quality of the feeds coming into the Market Risk System. This framework controls access to resources (files, database connections) and manages the dependencies needed for a script execution using Drools (an open-source rule engine). It is gradually replacing ad hoc tools used for daily production, which currently run on users' workstations and incur high operational risks.
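
A minimal sketch of how such controlled execution can look in Java, assuming the Jython embedding API (org.python.util.PythonInterpreter); the ScriptRunner class and the 'resource'/'result' bindings are illustrative, not taken from the actual framework:

import org.python.util.PythonInterpreter;
import org.python.core.PyObject;

public class ScriptRunner {

    // Executes a super-user script, exposing only a resource the framework
    // decided to grant (for example a read-only file handle or a pooled DB connection).
    public PyObject run(String scriptSource, Object grantedResource) {
        PythonInterpreter interpreter = new PythonInterpreter();
        // The script sees the resource only through a named binding; it never
        // opens files or database connections itself.
        interpreter.set("resource", grantedResource);
        interpreter.exec(scriptSource);
        // By convention the script stores its output in a 'result' variable.
        return interpreter.get("result");
    }
}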

Data grids

As data volumes grow quite fast, we are now in a position where some of our components that cache data consume large quantities of memory (deal information, for example, uses 8GB of memory), and sometimes that data needs to be shared across components.

To address this problem I started looking into data grid technology. A few products were evaluated (Terracotta, Coherence, GridGain), and I organized meetings and workshops with the relevant consultants and team members. We finally opted for Coherence and are about to start the detailed design of our system using this tool.
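
A minimal sketch of the intended usage, assuming the Oracle Coherence NamedCache API; the cache name "deals" and the DealCache class are illustrative, not the actual design:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class DealCache {

    // A shared cache held in the grid instead of in each component's heap.
    private final NamedCache deals = CacheFactory.getCache("deals");

    // Any component in the cluster can store a deal...
    public void store(String dealId, Object deal) {
        deals.put(dealId, deal);
    }

    // ...and any other component can read it without keeping its own large copy.
    public Object lookup(String dealId) {
        return deals.get(dealId);
    }
}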

Data Loading and Transformation

I took part in the design and implementation of a mission-critical data loading and transformation chain of 8 components used to load the data needed to calculate VaR (value at risk) for the whole bank. These components have to be highly scalable, reliable and resource-conscious, as they must cope with exponentially increasing data volumes (from 10 million rows per day two years ago to 200 million rows in 25,000 files today).

We used a test-driven approach to gradually deliver new components written in Java 5 and decommission old ones. We took advantage of Spring and the inversion-of-control pattern to produce high-quality test suites that are easy to maintain, which gives us great confidence when developing additional functionality or refactoring. A nightly build that runs these test suites also plays a crucial role in our success; Maven and CruiseControl are the tools we use to achieve this.
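
A minimal sketch of the test style this enables, assuming JUnit 4; the RowSource and RowFilter classes are illustrative, not from the actual suite. Because collaborators are injected through the constructor, the test wires in an in-memory fake instead of a real feed or database:

import static org.junit.Assert.assertEquals;
import org.junit.Test;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RowFilterTest {

    // Illustrative collaborator that the real component would receive from Spring.
    interface RowSource {
        List<String> rows();
    }

    // Illustrative component under test: keeps only rows mapped to an internal id.
    static class RowFilter {
        private final RowSource source;

        RowFilter(RowSource source) {
            this.source = source;
        }

        List<String> validRows() {
            List<String> kept = new ArrayList<String>();
            for (String row : source.rows()) {
                if (row.startsWith("ID:")) {
                    kept.add(row);
                }
            }
            return kept;
        }
    }

    @Test
    public void keepsOnlyMappedRows() {
        // The fake replaces the real feed; no Spring context or database is needed.
        RowFilter filter = new RowFilter(() -> Arrays.asList("ID:42", "garbage"));
        assertEquals(Arrays.asList("ID:42"), filter.validRows());
    }
}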

Since each component has a limited set of responsibilities, the components need to communicate with each other, mainly to pass on work to be done. Communication occurs via an ESB based on Mule and ActiveMQ. Messages received are transformed into units of work and passed on to worker threads; each component has a pool of between 4 and 6 threads, depending on configuration.
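
A minimal sketch of this consumption pattern, assuming the JMS API and the ActiveMQ client; the queue name, broker URL and WorkConsumer class are illustrative, not the actual configuration:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import org.apache.activemq.ActiveMQConnectionFactory;

public class WorkConsumer {

    // Worker pool of 4-6 threads depending on configuration; 4 is used here as an example.
    private final ExecutorService workers = Executors.newFixedThreadPool(4);

    public void start() throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer = session.createConsumer(session.createQueue("work.queue"));

        // Each incoming message becomes a unit of work executed by the pool.
        consumer.setMessageListener(message -> workers.submit(() -> process(message)));
        connection.start();
    }

    private void process(Message message) {
        // Parse the message and perform this component's loading or transformation step.
    }
}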

The bulk of the data is deal-level risk; it needs to be extracted from the files produced by front-office systems, and the information they contain transformed, filtered, and mapped to internal identifiers. The data ultimately goes into a Sybase IQ database, while Sybase ASE is used for lower-volume data and integration with legacy modules.

Since the Market Risk System operates 23 hours a day, 6 days a week, reliability, support and capacity are issues that need to be addressed. I introduced Spring AOP to gather various performance metrics, such as cache hit ratios. I also added JMX consoles, which really help us support the applications in development and production, and used Hyperic to collect data for alerting and capacity planning.
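
A minimal sketch of this kind of metric-gathering aspect, assuming Spring's @AspectJ support; the pointcut expression and class names are illustrative assumptions, not the production aspect:

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import java.util.concurrent.atomic.AtomicLong;

@Aspect
public class CacheMetricsAspect {

    private final AtomicLong hits = new AtomicLong();
    private final AtomicLong misses = new AtomicLong();

    // Wraps every cache lookup and records whether it returned a value.
    @Around("execution(* com.example.cache.DealCache.lookup(..))")
    public Object measure(ProceedingJoinPoint lookup) throws Throwable {
        Object value = lookup.proceed();
        if (value != null) {
            hits.incrementAndGet();
        } else {
            misses.incrementAndGet();
        }
        return value;
    }

    // Exposed, for example through a JMX MBean, as the cache hit ratio.
    public double hitRatio() {
        long total = hits.get() + misses.get();
        return total == 0 ? 0.0 : (double) hits.get() / total;
    }
}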

Jun 2005 – Dec 2005 Sungard

Consultant

I worked alongside the development team on a highly effective financial time-series management system. My responsibility was developing the JMS and SOAP layers.

Jun 2004 – Jun 2005 Algorithmics

Senior Java Developer, Operational Risk development team

Enhanced the reporting framework to make it more flexible by adding an XML report description file and a Java/JSP layout engine.

Optimised the report calculations done in Java, as well as the data retrieval done through a JDO layer (Kodo implementation). Some reports were implemented as SQL views.

Implemented new features and refactored others on both the client and server sides, mainly related to the data collection web interface and data loading, using Java, JSP, Struts and XML.

Completed a data warehouse case study with Oracle and Mondrian for the next generation of the OpRisk system. Wrote and used Ant scripts to build and deploy the application, run unit tests and load tests, and deploy the database.

I did a case study to see how we could use Cocoon, XSL and FOP to export reports in different formats, mainly for printing purposes (PDF).
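
A minimal sketch of the FOP part of that pipeline, assuming the Apache FOP embedding API; the file names and the ReportPdfExporter class are illustrative. An XSL stylesheet turns the report XML into XSL-FO, and FOP renders the FO into a PDF:

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStream;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.sax.SAXResult;
import javax.xml.transform.stream.StreamSource;
import org.apache.fop.apps.Fop;
import org.apache.fop.apps.FopFactory;
import org.apache.fop.apps.MimeConstants;

public class ReportPdfExporter {

    public void export(File reportXml, File reportToFoXsl, File pdfFile) throws Exception {
        FopFactory fopFactory = FopFactory.newInstance(new File(".").toURI());
        OutputStream out = new FileOutputStream(pdfFile);
        try {
            Fop fop = fopFactory.newFop(MimeConstants.MIME_PDF, out);
            // The XSL stylesheet maps the report XML to XSL-FO; FOP renders the FO to PDF.
            Transformer transformer = TransformerFactory.newInstance()
                    .newTransformer(new StreamSource(reportToFoXsl));
            transformer.transform(new StreamSource(reportXml), new SAXResult(fop.getDefaultHandler()));
        } finally {
            out.close();
        }
    }
}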

Aug 2000 – Jun 2004 Ivorium Software, Paris

Senior Developer

Mission at Geoservices S.A., France (30 months, Oil drilling industry).

I managed a consultant team of 4 people on the client site, working for the development team of Geonext, a new-generation mud logging system running on Windows. My responsibilities included task assignment, progress and design reviews, as well as code reviews.

I designed and developed many modules (servers, clients, GUIs) with UML, Java, C++, CORBA and Swing. Geonext has a distributed architecture that uses CORBA mainly as a messaging/notification layer between all the modules.

Most of the applications had to react to real-time events and relied heavily on multi-threading. Some were designed to handle real-time computations based on high-level mathematical models, with data coming from sensors at up to 50 Hz. One of these applications was the first in the world to detect kicks on a floating rig.

I implemented the migration from pure Java serialization to XML persistence using Castor, which was later moved to SQL Server persistence for some modules.
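
A minimal sketch of the Castor side of that migration, assuming Castor's static Marshaller/Unmarshaller API; the XmlPersistence class and the persisted type are illustrative:

import java.io.FileReader;
import java.io.FileWriter;
import org.exolab.castor.xml.Marshaller;
import org.exolab.castor.xml.Unmarshaller;

public class XmlPersistence {

    // Writes the object graph as XML instead of a binary serialized stream.
    public void save(Object module, String path) throws Exception {
        FileWriter writer = new FileWriter(path);
        try {
            Marshaller.marshal(module, writer);
        } finally {
            writer.close();
        }
    }

    // Reads the XML back into an object of the expected class.
    public Object load(Class<?> type, String path) throws Exception {
        FileReader reader = new FileReader(path);
        try {
            return Unmarshaller.unmarshal(type, reader);
        } finally {
            reader.close();
        }
    }
}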

I put in place Version Manager/Tracker tools and set up the internal processes to handle bugs, enhancements and version releases.

I compiled Matlab source into DLLs and linked them to our Java applications with JNI.
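
A minimal sketch of the JNI binding pattern involved, with an illustrative library and method name (the actual model names are not given in this profile):

public class KickDetector {

    static {
        // Loads kickmodel.dll on Windows (the DLL produced from the compiled Matlab code).
        System.loadLibrary("kickmodel");
    }

    // Implemented in C/C++ glue code that calls into the compiled Matlab model.
    public native double evaluate(double[] sensorSamples);

    public static void main(String[] args) {
        double score = new KickDetector().evaluate(new double[] {0.1, 0.4, 0.9});
        System.out.println("Model output: " + score);
    }
}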

Mission at Maxess GmbH, Germany (6 months, Supermarkets)

I developed a Java/Swing application used by salesmen to take orders from supermarkets. This application ran on a tablet PC and used DB2 replication to synchronize its database with the central ERP.

Mission at TrueScope Technology, USA (4 months, software engineering tools)

I was part of the team that designed and developed Intent (tm), a tool for software family management, using Java, Swing, MySQL, Oracle Lite and Formula One.

Sep 1997 – Aug 2000 SITA sc, Paris

Software Developer

I implemented software in C++ to create tagged files from the Oracle billing database, which were then transformed into PostScript by Compuset to produce bills.

I designed and implemented a client/server Delphi application used to create and retrieve on-demand PDF versions of the bills; it was used worldwide by account managers.

I designed and implemented administration tools in Delphi to manage user rights and to track the bill production and printing process.

Education

1996 DESS in Multimedia Information Systems (IAE Amiens)

1995 Maîtrise in Computer Science (Paris VI)

1994 Licence in Computer Science (Paris VI)

1993 DUT in Computer Science (Paris XIII)

1991 Baccalauréat D

Projects completed by seble21

Online portfolio of the provider 'seble21'

No items in this provider's portfolio

Certified client references


