What data storage volume can we expect on the vessel? (Is the data stored on the vessel or in the cloud?)

All raw data is kept on the ship as the primary data source and is stored safely in the HOMIP2 device's flash memory. With a 64 GB flash drive one can expect a log retention of at least two years in a standard performance monitoring solution (300 signals logged every minute on average). Before we transmit the data to shore we pre-aggregate it in order to optimize the data volume with regard to the transmission volume used. The exported data therefore usually features a lower time resolution than what is available on the ship, though this is customer configurable. From this point of view the cloud storage is in most cases a down-sampled mirror of the on-board database.
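The retention figure can be sanity-checked with some back-of-envelope arithmetic. The per-sample storage cost below is an assumption of ours (it is not stated by Hoppe), chosen so the result is consistent with the "at least two years" claim:

```python
# Back-of-envelope check of the retention figure above.
# BYTES_PER_SAMPLE is an assumption (timestamp, value, and indexing
# overhead per logged value), not a documented Hoppe figure.

FLASH_BYTES = 64 * 10**9          # 64 GB flash drive
SIGNALS = 300                     # signals logged per minute (average)
BYTES_PER_SAMPLE = 100            # assumed on-disk cost per sample

samples_per_year = SIGNALS * 60 * 24 * 365   # one sample per signal per minute
bytes_per_year = samples_per_year * BYTES_PER_SAMPLE

retention_years = FLASH_BYTES / bytes_per_year
print(f"{retention_years:.1f} years of retention")  # ~4.1 years at these assumptions
```

Even at this conservative per-sample cost the drive comfortably exceeds the stated two-year minimum, leaving headroom for denser logging configurations.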

Which products are utilised and how do you manage your cloud security?

Amazon S3 and Amazon DynamoDB are used for storage. Data in S3 is encrypted at rest using the SSE-KMS method; the underlying block cipher is AES-256. Data in DynamoDB is likewise encrypted at rest. Regarding cloud security management, there are four major components:

  1. For user management of our APIs, we use a fine-grained role-based access control model that builds on Amazon Cognito and Amazon API Gateway for authentication and authorization.
  2. Inside AWS, all our cloud resources use AWS Identity and Access Management (IAM) for implementing least privilege access.
  3. Hoppe uses security groups where possible to restrict access as well.
  4. Hoppe uses AWS Secrets Manager for handling secrets internally.

We have been receiving a 429 Too Many Requests response code when sending multiple requests simultaneously across several vessels. In the short term, we can limit the requests to only registered vessels that are confirmed to have data. However, as more vessels are brought onto our platform, we may run into these limits in a more significant way, especially as we will continue to utilize a parallelized workflow. What is the API specification on the rate limiting threshold and retry periods?

Yes, the APIs are rate limited by default. This is to encourage users to employ local caching for slowly changing data. However, the limits are set per API key and can always be adjusted to the customer's needs. In order to adjust the limits, please provide us with numbers on your expected usage pattern. The information we need is:

  • regular requests/second
  • max. burst requests/second
  • requests per month.

With these numbers at hand we can adjust your account settings accordingly.
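Until the limits are raised, a parallelized client should treat 429 responses as transient and back off before retrying. Since Hoppe does not publish specific retry periods, the schedule below (exponential backoff with jitter) is our own illustrative choice, not a documented one:

```python
import random
import time

def with_retries(send, max_attempts=5, base_delay=1.0):
    """Call send() (expected to return an HTTP status code) and retry
    on 429 with exponential backoff plus jitter. The delays here are
    illustrative defaults, not Hoppe-documented retry periods."""
    for attempt in range(max_attempts):
        status = send()
        if status != 429:
            return status
        # Backoff grows as base_delay * 2**attempt, plus up to one
        # base_delay of random jitter to de-synchronize parallel clients.
        time.sleep(base_delay * (2 ** attempt + random.random()))
    return 429  # still throttled after all attempts

# Demo with a fake sender that is throttled twice, then succeeds.
responses = iter([429, 429, 200])
print(with_retries(lambda: next(responses), base_delay=0))  # prints 200
```

In a real client, `send` would issue the HTTP request and return the response status; wiring it to a fake iterator here keeps the sketch self-contained.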

We have tried to gain access to the signals API but we are still getting the response "403 Forbidden". Is this expected? It would be very helpful for us to have the full signal schema so that we can integrate generally with the API.

Could you please check again that you subscribed to the Signals-API in the developer portal and that you provide your personal API key (to be obtained from the dashboard at https://docs.hoppe-sts.com/) via the x-api-key header field in your GET request? Please also check that you can retrieve data via the "Try it out" functionality in the developer portal. A sample request in cURL would look like:

  curl -X GET "https://api.hoppe-sts.com/signals/collections/hoppe/signals" -H "accept: application/json" -H "Authorization: <token>" -H "x-api-key: <api-key>"
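The same request can be expressed with Python's standard library. The `<your-api-key>` and `<your-token>` values are placeholders to be replaced with your own credentials from the developer portal; the request is built but not sent, so the sketch runs without network access:

```python
import urllib.request

API_KEY = "<your-api-key>"        # from the developer portal dashboard
ACCESS_TOKEN = "<your-token>"     # placeholder for your Authorization value

req = urllib.request.Request(
    "https://api.hoppe-sts.com/signals/collections/hoppe/signals",
    headers={
        "accept": "application/json",
        "Authorization": ACCESS_TOKEN,
        "x-api-key": API_KEY,
    },
)

# Uncomment to send the request once real credentials are in place:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())

print(req.get_header("X-api-key"))  # urllib capitalizes stored header names
```

A missing or unsubscribed API key is the usual cause of the 403 response, so checking this header first is a quick diagnostic.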

Will the signals API be versioned? In other words, is the current API version "1" and will any changes to the signals structure and mappings be named and released separately?

Any backwards-compatibility-breaking changes will be announced and versioned via path versioning. For the moment our development roadmap contains only schema additions, which we consider non-breaking minor releases. These will not be reflected in the version scheme.

Can I get some assistance from Hoppe Marine with setup and configuration, in case I have no experience in retrieving data via APIs?

Hoppe Marine is delighted to assist during the initial setup. For this purpose, an initial consultation meeting will be held to discuss all requirements and to provide possible short-term solutions for an optimized utilization of the Data-Pool-Services. After that, we can decide how Hoppe Marine can further assist the client. In any case, full access to the system and interface documentation is granted after the initial consultation meeting.

Is it possible to have access to a sample data set, in order to replicate the live API before the vessel is online?

A representative data sample will be provided upon request. This data set consists of one month of representative simulated ship data. The available set of signals (database columns) as well as the data aggregation for a specific vessel might differ from the sample data set. All these configurations (data aggregation rate, data export rate and the like) are at the customer's disposal and can be adjusted to customer needs.

How often will new data be available via the API?

This behaviour is fully customer configurable. Data update rates range from every two minutes to once per day. This is always a trade-off between the data volume used and real-time data requirements. The configuration is made during the early project planning phase for the initial installation; changes to this configuration are possible later on. A typical use case with sufficiently high resolution for data evaluation and fleet optimization is a logging rate of 1 minute with an export and transmission interval of 5 minutes.

Where can I get information about the APIs?

Hoppe Marine offers a developer portal for clients. The portal can be reached worldwide 24/7 via docs.hoppe-sts.com. After successful registration, the client's developers have access to all necessary API details and functionalities. Prior to first use, a user account must be set up by our Hoppe Marine administrators.

Is it possible to export data for further individual work with Excel?

Offering raw data in additional formats (e.g. .csv and .xls) is planned as a new feature for 2020. Today, raw data is offered as optimized SQLite files or in JSON format.
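Until the CSV export feature ships, a downloaded SQLite file can be converted locally into a CSV that Excel opens directly. The `signals` table and its columns below are hypothetical stand-ins, since the actual schema of the delivered files may differ:

```python
import csv
import sqlite3

def sqlite_table_to_csv(db_path, table, csv_path):
    """Dump one table of a SQLite file to a CSV file for use in Excel."""
    con = sqlite3.connect(db_path)
    try:
        cur = con.execute(f"SELECT * FROM {table}")  # table name is trusted input here
        with open(csv_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow([col[0] for col in cur.description])  # header row
            writer.writerows(cur)                                 # data rows
    finally:
        con.close()

# Demo with a throwaway database standing in for a real export file;
# the table name and columns are assumptions for illustration only.
con = sqlite3.connect("demo.sqlite")
con.execute("DROP TABLE IF EXISTS signals")
con.execute("CREATE TABLE signals (ts TEXT, name TEXT, value REAL)")
con.execute("INSERT INTO signals VALUES ('2020-01-01T00:00:00Z', 'me_power', 7350.0)")
con.commit()
con.close()

sqlite_table_to_csv("demo.sqlite", "signals", "signals.csv")
```

The resulting `signals.csv` can be opened in Excel without any import wizard, since the header row carries the column names.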

How can the information about data quality be retrieved?

Hoppe Marine offers a simple interface for retrieving information about data quality. The interface is described in the developer documentation. Information about data quality of every single data point as well as of entire signal groups can be retrieved via this interface.

Is the data also retrievable by Fleet Management systems / Fleet Optimization providers?

In principle, every IT system that communicates with REST APIs is capable of retrieving data from the Hoppe Data-Pool. Therefore interfaces to business intelligence tools such as Tableau, but also to Elasticsearch or classic ISO-SQL-conformant database systems via JDBC, can be provided. Please contact our Hoppe Marine system administrators for technical details.

Is the provision of historical data possible?

The APIs provide all data files available for the specific vessel. For example, if data recording with the embedded iPC HOMIP2 on board started two years before the contract was signed, this data can be made available on the shore-side API upon customer request. Furthermore, see "Is it possible to load external or historic data into the data pool via interface?" in the Ship-to-Shore FAQ.

Can the chosen data package be changed between Basic Data and Quality Data after the initial setup of the interfaces?

Yes, the subscription model can be changed and optimized afterwards according to the client's needs. Besides this, please get in touch to learn more about our SDK.