1. Use Postman or any other software that can make a REST API call to ThingWorx.

2. Create a request in Postman with the following parameters:

  1. Type: POST
  2. URL: https://<IP>:<PORT>/Thingworx/Users/<UserName>/Services/AssignNewPassword
    • <IP>: IP of the server where ThingWorx is installed.
    • <PORT>: Port on which ThingWorx is running (if required).
    • <UserName>: User name of the user whose password is to be reset.
  3. Headers:
    • appKey: Your Administrator app key, or an app key of a user having permission to run the AssignNewPassword service for the target user.
    • Content-Type: application/json
  4. Body:
    • { "password": "<NewPassword>" }
      (The AssignNewPassword service takes the new password as its password input; substitute your new password.)

3. Send the request.

4. Log in using the new password.
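The steps above can be sketched in plain JavaScript as follows. This only assembles the request; the IP, port, user name, and app key values are placeholders you must substitute, and the password parameter name reflects the AssignNewPassword service's input:

```javascript
// Placeholder values -- substitute your own server, user, and app key.
const ip = "192.168.0.10";
const port = "8443";
const userName = "jsmith";

// Build the AssignNewPassword request exactly as described in the steps above.
const url = "https://" + ip + ":" + port +
  "/Thingworx/Users/" + userName + "/Services/AssignNewPassword";
const headers = {
  appKey: "<your-administrator-app-key>",
  "Content-Type": "application/json"
};
// Body: the service takes the new password as its "password" input.
const body = JSON.stringify({ password: "<NewPassword>" });

console.log(url);
```

Sending this with your HTTP client of choice (POST, with the headers and body above) performs the reset.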

Join us on March 9, 11am EST for an interactive webcast where we share best practices in IIoT development and explain how data modeling can be best applied when building complex IIoT solutions.



When creating an Industrial Internet of Things (IIoT) application, many developers start by first thinking about connectivity, sensors, or even the user interface. But what about the Data Model?  The Data Model is the basic structure of your information and how you store, manipulate, and interact with it. Often, this is left to later stages with the hopes that it grows organically out of the application’s needs.

Best practices for IIoT application development recommend that the Data Model is one of the first things to think about. Decide who needs to interact with the data and what specific requirements they’ll have. Think about modularity, reusability, and future updates. Consider how your application’s data will interface with external systems and what impact external data has on the “standardization” of your Data Model.


What you’ll take away:

  • A quick introduction into Data Modeling, its benefits, and when to use it
  • A better understanding of the unique requirements of IIoT applications and best practices to help you get started designing your data model
  • A primer on the ThingWorx Thing Model and how to use it to set up the structure of your IIoT application

Register today to learn everything you need to know about designing a data model for your IIoT application in less than 60 minutes.

Reserve your spot: http://demand.ptc.com/thingworx-registration?key=460DCD7D965BAFB43E92C9D324EE596A&eventid=1360344

Last summer we released a first version of the eSupport Portal, aimed at ThingWorx Customers and Partners.

The page was well received, has had several thousand visits, and is used by most ThingWorx platform users as a first port of call for support.


We processed feedback from users around the globe and are now happy to announce a new version of the page.

Key motivations for the recent updates were:

  • Placing most relevant support tools and features at the fingertips of all users of all ThingWorx platforms
  • A more consistent user experience with other ThingWorx.com pages
  • Improved visibility and management of Community and service interactions


This 2-minute video introduces key elements of the new portal:



Thank you to all who shared feedback in the past. The ongoing design of the page is based on the needs of those who use it, so please let us have your reactions and suggestions by responding below, or by using the feedback widget in the page itself.

Timers and schedulers can be useful tools in a ThingWorx application.  Their only purpose, of course, is to create events that the platform can use to perform any number of tasks.  These can range from requesting data from an edge device, to running calculations for alerts, to running archive functions for data.  Sounds like a simple enough process.  Then why do most platform performance issues seem to come from these two simple templates?


It all has to do with how the event is subscribed to and how the platform processes events and subscriptions.  The task of handling MOST events and their related subscription logic falls to the EventProcessingSubsystem.  You can see its metrics via the Monitoring -> Subsystems menu in Composer.  This shows how many events have been processed and how many events are waiting in the queue, along with some other settings.  You can often identify issues with Timers and Schedulers here: the number of queued events climbs while the number of processed events stagnates.


But why!?  Shouldn't this multi-threaded processing take care of all of that?  Most of the time it can, but when you suddenly flood it with transactions all trying to access the same resources at the same time, it can grind to a halt.


This typically occurs when you create a timer/scheduler and subscribe to its event at a template level.  To illustrate, let's look at an example of what might occur.  In this scenario, imagine we have 1,000 edge devices that we must pull data from, and we only need this information every 5 minutes.  When we retrieve it we must look up some data mapping in a DataTable and store the data in a Stream.  At the 5-minute interval the timer fires its event, and suddenly the EventProcessingSubsystem gets 1,000 events all at once.  This by itself is not a problem, but to be efficient it will try to process as many as it can concurrently.  So we now have multiple transactions all trying to query a single DataTable at once.  In order to read this table, the database (no matter which back-end persistence provider) will lock parts or all of the table (depending on the query).  As you can probably guess, things begin to slow down, because each transaction holds the lock while many others are trying to acquire it.  This happens over and over until all 1,000 transactions are complete.  In the meantime we are also running other commands in the subscription and writing Stream entries to the same database inside the same transactions.  Additionally, remember that all of these transactions and the data they access must be held in memory while they are running, so you will also see a memory spike and, depending on resources, can run into problems there as well.


Regular events can easily be part of any use case, so how should that work?  The trick to know here comes in two parts.  First, any event a Thing raises can be subscribed to on that same Thing.  When you do this, the subscription logic does not go through the EventProcessingSubsystem; it executes on the threads already open in memory for that Thing.  So subscribing to a timer event on the Timer Thing that raised the event will not flood the subsystem.


In the previous example, how would you go about polling all of these Things?  Simple: take the exact logic you would have executed in the template subscription and move it to the timer subscription.  To keep the context of each Thing, use the GetImplementingThings service for the template to retrieve the list of all 1,000 Things created from it, then loop through these Things and execute the logic.  This also means that all of the DataTable queries and logic are executed sequentially, so the database locking issue goes away as well.  Memory pressure decreases too, because the memory allocated for the queries is either reused or can be cleaned up during garbage collection, since the variable that holds each result is reassigned on every loop iteration.
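As a rough sketch of that pattern, the subscription on the Timer Thing could look like the following. The platform objects (`ThingTemplates`, `Things`, and the `ReadAndStoreData` service) are mocked here as plain JavaScript so the sequential loop is visible when run standalone; in a real subscription they are the ThingWorx built-ins, and the template and service names are hypothetical:

```javascript
// Mock stand-ins for the ThingWorx APIs (names are hypothetical).
const ThingTemplates = {
  MyEdgeTemplate: {
    // In ThingWorx this returns an infotable of all implementing Things.
    GetImplementingThings: () => [
      { name: "Device_1" }, { name: "Device_2" }, { name: "Device_3" }
    ]
  }
};
const Things = new Proxy({}, {
  get: (_, name) => ({
    // In a real service this would query the DataTable and write the Stream.
    ReadAndStoreData: () => name + ": polled"
  })
});

// Subscription body on the Timer Thing: loop sequentially, so the
// DataTable is only ever queried by one transaction at a time.
const polled = [];
const devices = ThingTemplates["MyEdgeTemplate"].GetImplementingThings();
for (const device of devices) {
  polled.push(Things[device.name].ReadAndStoreData());
}
console.log(polled.length + " devices polled");
```

The key design point is that all 1,000 iterations run inside one subscription execution instead of 1,000 queued events.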


Overall, it is best to avoid Timers and Schedulers whenever possible.  Use data-triggered events, UI interactions, or REST API calls to initiate transactions instead.  This lowers the overall risk of flooding the system with resource demands, from processor, to memory, to threads, to database.  Sometimes, though, they are needed.  Follow the basic guidance here and things should run smoothly!

  • Embedded databases come with the installation of the ThingWorx Platform
    • No additional installation or configuration is required for embedded databases
    • Read about the various benefits and pitfalls of embedded versus external below
  • Database Options
    • H2
      • RDBMS (relational database management system), written in Java
      • Has a small memory footprint
      • Embedded into ThingWorx for easy installation
      • Not as robust as other database options
      • Not scalable in production environments (unless used alongside a separate, external database for stream, value stream, and other data)
        • See KCS Article CS243975 for further reading on the use of external databases
      • Meant to be used for quick deployments and testing environments
    • PostgreSQL
      • ORDBMS (object-relational database management system), written in C
      • PostgreSQL is the ThingWorx recommended database for production systems
      • More Robust
        • External database installed separately from ThingWorx
        • Beneficial because external databases can be specifically configured for use in production, while embedded databases cannot
        • Able to efficiently handle larger amounts of data and store more data without affecting ThingWorx system performance
      • Greater Stability
        • Recover from data corruptions more easily by accessing the database from an external application (separate from ThingWorx) using simple SQL statements
        • Easier to back-up the database in case of issues (further reading in KCS Article CS246598)
        • Less risky and simpler upgrade procedure, which occurs "in-place"
          • Instead of exporting and importing data and entities, a simple schema update allows these to automatically persist into the new version
          • If the ThingworxStorage folder is accidentally deleted, entities and data remain secure in the external database
      • More Secure
        • HA (High Availability) allows for multiple server instances at different locations in the network
          • Assists in time of failover, i.e. if one server fails, the other can immediately take over
          • Secures the data and prevents further data loss in the event of a failure
        • Customizable security settings and complex password requirements
        • Fewer security vulnerabilities than other databases
      • Because Postgres is an external database, it can be harder to install
        • Follow the steps in the installation guide closely
        • See KCS Articles CS235937 and CS230085 for troubleshooting and help with installation and configuration
    • Hana
    • Neo4J
      • GDBMS (graph database management system), written in Java
      • Data is not easily accessed by external applications, and CQL must be used instead of SQL, making recovery from corruptions very difficult
      • Embedded database with limited configuration options
      • Known to have issues with deadlocks
      • Deprecated in version 7.0 (related KCS Article: CS228537)
  • For full installation steps for H2 and PostgreSQL, see the ThingWorx Installation Guide

The ThingWorx Platform is fully exposed through the REST API, including every property, service, subsystem, and function.  This means that a remote device can integrate with ThingWorx by sending correctly formatted Hyper Text Transfer Protocol (HTTP) requests. Such an application could alter Thing properties, execute services, and more.


To help you get started using the REST API for connecting your edge devices to ThingWorx, our ThingWorx developers put together a few resources on the Developer Portal:


  • New to developing with ThingWorx? Use our REST API Quickstart guide that explains how to: create your first Thing, add a property to your Thing, then send and retrieve data.


  • Advanced ThingWorx user? This new REST API how-to series features instructions on how to use the REST API for many common tasks, including a troubleshooting section.


  • Use ThingWorx frequently but haven’t learned the syntax by heart? We’ve got you covered. The REST API cheat sheet gives details of the most frequently used REST API commands.


We are regularly asked in the community how to send data from ThingWorx platform to ThingWorx Analytics in order to perform some analytics computation.

There are different ways to achieve this and it will depend on the business needs.

  • If the analytics need is about anomaly detection, the best way forward is to use ThingWatcher without the use of ThingWorx Analytics server. The ThingWatcher Help Center is an excellent place to start, and a quick start up program can be found in this blog.
  • If the requirement is to perform a full blown analytics computation, then sending data to ThingWorx Analytics is required. This can be achieved by
    • Using ThingWorx DataConnect, and this is what this blog will cover
    • Using a custom implementation. I would be very pleased to get your feedback on your experience implementing a custom solution, as this could give some good ideas to others too.

In this blog we will use the example of a smart Tractor in ThingWorx where we collect data points on:

  • Speed
  • Distance covered since last tyre change
  • Tyre pressure
  • Amount of gum left on the tyre
  • Tyre size.

From an Analytics perspective the gum left on the tyre would be the goal we want to analyse in order to know when the tyre needs changing.


We are going to cover the following:

DataConnect configuration
ThingWorx Configuration
Data Analysis Definition Configuration
Data Analysis Definition Execution
Demo files


For people not familiar with ThingWorx Analytics, it is important to know that ThingWorx Analytics only accepts a single data file in .csv format.
Each column of the .csv file represents a feature that may have an impact on the goal to analyse.

For example, in the case of the tyre wear, the distance covered, the speed, the pressure and tyre size will be our features. The goal is also included as a column in this .csv file.

So any solution sending data to ThingWorx Analytics will need to prepare such a .csv file.
DataConnect performs this activity, applying some transformations along the way.




  1. Decide on the properties of the Thing to be collected, that are relevant to the analysis.
  2. Create service(s) that collect those properties.
  3. Define a Data Analysis Definition (DAD) object in ThingWorx.
    The DAD uses a Data Shape to define each feature that is to be collected and sends them to ThingWorx Analytics. Part of the collection process requires the use of the services created in step 2.
  4. Upon execution, the DAD will create one skinny csv file per feature and send those skinny .csv files to DataConnect. In the case of our example the DAD will create a speed.csv, distance.csv, pressure.csv, gumleft.csv, tyresize.csv and id.csv.
  5. DataConnect processes those skinny csv files to create a final single .csv file that contains all these features. During the processing, DataConnect will perform some transformation and synchronisation of the different skinny .csv files.
  6. The resulting dataset csv file is sent to ThingWorx Analytics Server where it can then be used as any other dataset file.
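To make step 5 concrete, here is a toy illustration in plain JavaScript (not DataConnect itself) of how skinny files sharing the same id can be merged into one row per Thing; the field names follow the tractor example:

```javascript
// Two "skinny" datasets, each keyed by the same Thing id.
const speed = [
  { id: "Tractor_1", value: 12 },
  { id: "Tractor_2", value: 9 }
];
const pressure = [
  { id: "Tractor_1", value: 32 },
  { id: "Tractor_2", value: 30 }
];

// Merge on id, conceptually what DataConnect does when building
// the single master dataset csv file.
const merged = speed.map(function (row) {
  const match = pressure.find(function (p) { return p.id === row.id; });
  return { id: row.id, speed: row.value, pressure: match.value };
});
console.log(JSON.stringify(merged[0]));
```

This is why the id must be identical across all skinny files: it is the join key.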




DataConnect configuration

As seen in this workflow, a ThingWorx server, a DataConnect server and a ThingWorx Analytics server will need to be installed and configured.
Thankfully, the installation of DataConnect is rather simple and well described in the ThingWorx DataConnect User’s guide.
Below I have provided a sample of a working dataconnect.conf file for reference, as this is one place where syntax can cause a problem:


ThingWorx Configuration

The Platform Subsystem needs to be configured to point to the DataConnect server.
This is done under SYSTEM > Subsystems > PlatformSubsystem:




DAD Configuration

The most critical part of the process is to properly configure the DAD as this is what will dictate the format and values filled in the skinny csv files for the specific features.

The first step is to create a data shape with as many fields as feature(s)/properties collected.



Note that one field must be defined as the primary key. This field is the one that uniquely identifies the Thing (more on this later).

We can then create the DAD using this data shape as shown below



For each feature, a datasource needs to be defined to tell the DAD how to collect the values for the skinny csv files.
This is where custom services are usually needed. Indeed, the Out Of The Box (OOTB) services, such as QueryNumberPropertyHistory, help to collect logged values, but the id returned by those services is continuously incremented, which does not work for the DAD.
The id returned by each service needs to be what uniquely identifies the Thing. It needs to be the same for all records for this Thing across the different skinny csv files. It is this field that DataConnect then uses to merge all the skinny files into one master dataset csv file.

A custom service can make use of the OOTB services; however, it will need to override the id value.
For example, the service below uses QueryNumberPropertyHistory to retrieve the logged values and timestamps but then overrides the id with the Thing’s name.
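The idea can be sketched as follows. `QueryNumberPropertyHistory` and `me.name` are ThingWorx built-ins, mocked here as plain JavaScript so the id override is visible when run standalone; the output field names are assumptions for illustration:

```javascript
// Mocked stand-in for the OOTB QueryNumberPropertyHistory service.
function QueryNumberPropertyHistory() {
  return [
    { id: 1, timestamp: 1000, value: 42.0 },  // id auto-increments...
    { id: 2, timestamp: 2000, value: 43.5 }   // ...which the DAD cannot use
  ];
}
const me = { name: "Tractor_1" };  // in ThingWorx, the current Thing

// GetHistory pattern: keep value and timestamp, but override the id
// with the Thing name so every skinny file shares the same merge key.
const result = QueryNumberPropertyHistory().map(function (row) {
  return { id: me.name, timestamp: row.timestamp, value: row.value };
});
console.log(result[0].id);
```

Every record for this Thing now carries the same id across all skinny files, which is exactly what DataConnect needs for the merge.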




The returned values of the service then need to be mapped in the DAD to indicate which output corresponds to the actual property’s value, which to the Thing id, and which to the timestamp (if needed).
This is done through the Edit Datasource window (by clicking on the Add Datasource link, or on the Datasource itself if already defined, in the Define Feature window).



On the left hand side, we define the origin of the datasource. Here we have selected the service GetHistory from the Thing Template smartTractor.

On the right hand side, we define the mapping between the service’s output and the skinny .csv file columns.

Circled in grey are the outputs from the service. Circled in green is what defines the columns in the .csv file.
A skinny csv file will have 1 to 3 columns, as follows:

  • One column for the ID. Simply tick the radio button corresponding to the service output that represents the ID.
  • One column representing the value of the Thing property. This is indicated by selecting the link icon on the left hand side in front of the returned data that represents the value (in our example the output data from the service is named value).
  • One column representing the Timestamp. This is only needed when a property is time dependent (for example, a time series dataset). In the example the property is Distance; the distance covered by the tyre does depend on time. However, we would not have a timestamp for the TyreSize property, as the wheel size will remain the same.

How many columns should we have (and therefore how many outputs should our service have)?

  • The .csv file representing the ID will have one column, and therefore the service collecting the ID returns only one output (Thing name in our smartTractor example – not shown here but is available in the download)
  • Properties that are not time bound will have a csv file with 2 columns, one for the ID and one for the value of the property.
  • Properties that are time bound will have 3 columns: 1 for the ID, one for the value and one for the timestamp. Therefore the service will have 3 outputs.

  Additionally, the inputs for the service may need to be configured, by clicking on the icon.


Once the datasources are configured, you should configure the Time Sampling Interval in the General Information tab.
This sampling interval will be used by DataConnect to synchronize all the skinny csv files. See the Help Center for a good explanation on this.


DAD Execution

Once the above configuration is done, the DAD can be executed to collect property values already logged on the ThingWorx platform.
Select Execution Settings in the DAD and enter the time range for property collection:



A dataset with the same name as the DAD is then created in DataConnect, as well as in ThingWorx Analytics Server.





ThingWorx Analytics:



The dataset can then be processed as any other dataset inside ThingWorx Analytics.


Demo files

For convenience I have also attached a ThingWorx entities export that can be imported into a ThingWorx platform, so you can take a closer look at the setup discussed in this blog.
Also attached is a small simulator to populate the properties of the Tractor_1 Thing.
The usage is:

java -jar TWXTyreSimulatorClient.jar  hostname_or_IP port AppKey

For example: java -jar TWXTyreSimulatorClient.jar <hostname_or_IP> 8080 d82510b7-5538-449c-af13-0bb60e01a129


Again, feel free to share your experience in the comments below, as it will be very interesting for all of us.

Thank you

Validator widgets provide an easy way to evaluate simple expressions and allow users to see different outcomes in a Mashup.


Using a validator is fairly intuitive for simple expressions, such as: is my field blank? But if we need to evaluate a more complex scenario based on multiple parameters, then we can use our validator with a backing service that performs the more complex logic.


To show how services can work in conjunction with a validator widget, let’s consider a slightly more complicated scenario such as: A web form needs to validate that the zip or postal code entered by the user is consistent with the country the user selected.

Let’s go through the steps of validating our form:

  • Create a service with two input parameters. Our service will use regex to validate a postal code based on the user’s country.  Here’s some sample code we could use on an example Thing:
//Input parameters: Country and PostalCode (strings)
//Country-based regular expressions:
var reCAD = /^[ABCEGHJKLMNPRSTVXY]{1}\d{1}[A-Z]{1} *\d{1}[A-Z]{1}\d{1}$/;
var reUS = /^\d{5}(-\d{4})?$/;
var reUK = /^[A-Za-z]{1,2}[\d]{1,2}([A-Za-z])?\s?[\d][A-Za-z]{2}$/;
//Stays null when the country is not recognized, so unknown countries fail validation
var search = null;
//Validate based on Country:
if (Country === "CAD")
    search = reCAD.exec(PostalCode);
else if (Country === "US")
    search = reUS.exec(PostalCode);
else if (Country === "UK")
    search = reUK.exec(PostalCode);
//Service output: true only when the postal code matched
var result = (search !== null);
  • Set up a simple mashup to collect the parameters and pass them into our service

  • Add a validator widget
  • Configure the validator widget to parse the output of the service. ServiceInvokeComplete on the service should trigger the evaluation and pass the result of the service into a new parameter on the widget called ServiceResult (Boolean). The expression for the evaluator would then be: ServiceResult? true:false

  • Based on the output of the validator, provide a message confirming the postal code is valid or invalid
  • Add a button to activate the service and the validator evaluation


Of course, in addition to providing a message we can also use the results of the validator to activate additional services (such as writing the results of the form to the database).


For an example you can import into your ThingWorx environment, please see the attached .zip file which contains a sample mashup and a test thing with a validator service.



The H2 database is provided as an embedded persistence provider with ThingWorx. However, for those of us interested in peeking inside the structure of how the data is persisted, it's challenging to simply connect to it the way we would to databases that are installed separately, e.g. PostgreSQL, SAP HANA, or any other RDBMS connecting to ThingWorx via JDBC.


Disclaimer: This guide is for debugging or view-only scenarios. Any direct change to the stored data in the H2 database may lead to data corruption, resulting in data loss or ThingWorx failing to start.


How to connect to embedded H2 Database installation


Challenging as it may appear at first sight, connecting to the H2 database embedded with ThingWorx is quite straightforward. Here's a quick guide:


1. Download and Install H2 Database engine


     Visit the official H2 Database page and download the latest stable version for your platform.


2. Access the H2 Database browser


     Navigate to the H2 Console; e.g. if installed on Windows: Windows Start > H2 Console

3. Connecting to the H2 Database embedded with ThingWorx

     H2 Console is a web-based UI and will be launched in a web browser; see the following sample screenshot. To connect to the existing embedded H2 DB used within ThingWorx, simply provide the location of \\ThingWorxStorage\Database\ and press Connect, without any username and password.
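For reference, H2 JDBC URLs follow the jdbc:h2:<path> pattern; a hypothetical example of the connection string (the path depends on your installation, and the database base name is the database file's name without its .mv.db extension):

```
jdbc:h2:C:\ThingWorxStorage\Database\<database-name>
```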

4. Exploring the H2 Console

     Once connected, you will be able to view the schema and all the data stored within, like so:

Here is a sample run of ConvertJSON, just for testing

1. Create a DataShape


2. The ConvertJSON service takes 4 inputs:

fieldMap (the structure of the infotable in the json row)

json (the json content)
  { "rows":[

rowPath (the json path that indicates the table rows)

dataShape (the infotable data shape)
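Since the screenshots are not reproduced here, a small plain-JavaScript sketch of what the json and rowPath inputs mean (the payload and path are hypothetical, and each row's fields must match the DataShape created in step 1):

```javascript
// Hypothetical json content with the table rows under "rows".
const json = {
  rows: [
    { name: "Thing1", value: 10 },
    { name: "Thing2", value: 20 }
  ]
};

// rowPath ("rows" here) tells ConvertJSON where the row array lives;
// each row object becomes one infotable row shaped by the dataShape input.
const rowPath = "rows";
const rows = json[rowPath];
console.log(rows.length);
```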

One of the issues we encountered recently was that we could not establish a VNC remote session.

The edge device was located outside the internal network where Tomcat was hosted, and all access to the instance went through an Apache reverse proxy.


The EMS was able to connect successfully to the server, because Apache had the WebSocket forwarding set up correctly through the following directive:

ProxyPass "/Thingworx/WS/"  "wss://"

However, we saw that tunnels closed immediately after creation, and as a result (or so we thought) we could not connect from the HTML5 VNC viewer.

More diagnostics revealed that you need to have ProxyPass directives for the following:

- the EMS makes calls to another WS endpoint, called WSTunnelServer. After you set this up, the EMS will be able to create tunnels to the server.

- the HTML5 VNC page makes a "websocket" call to yet another WS endpoint, called WSTunnelClient.

Only after this step can you successfully use tunnels through a reverse proxy.
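Putting it together, the reverse proxy ends up with three WebSocket forwarding rules; a hedged sketch in the same style as the directive above, where the host and port are placeholders for your internal ThingWorx instance:

```apache
ProxyPass "/Thingworx/WS/"             "wss://<internal-host>:<port>/Thingworx/WS/"
ProxyPass "/Thingworx/WSTunnelServer/" "wss://<internal-host>:<port>/Thingworx/WSTunnelServer/"
ProxyPass "/Thingworx/WSTunnelClient/" "wss://<internal-host>:<port>/Thingworx/WSTunnelClient/"
```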

Hope it helps!



Since there have been discussions regarding SNMP capabilities when using the ThingWorx platform, I have made a guide on how you can manage SNMP content with the help of an EMS.


Library used: SNMP4J - http://www.snmp4j.org/


Purpose: trapping SNMP events from an edge network through a Java SDK EdgeMicroServer implementation, then forwarding the trap information to the ThingWorx server.

Background: there are devices, like network elements (routers, switches), that raise SNMP traps in external networks. Usually third-party systems collect this information and you can then query them, but if you want to catch the trap directly, you can use this starter-kit implementation.

Attached to this blog post you can find an archive containing the source code files and the entities needed to run the example, along with a document describing how to set up and run the project and the thought process behind it.



Andrei Valeanu

There are some scenarios where you don't necessarily want to connect to your corporate mail server, or to a public mail server like Gmail - e.g. when testing a new function that could possibly spam the official mail servers - or where the mail server is not yet available. In such a scenario it might be a good idea to use a custom, private mail server to be able to send and receive emails locally on a test or development environment.


In this post I will show how to use hMailServer and set up the ThingWorx mail extension to send emails. This post concentrates on installing and deploying within a Windows environment, more specifically on a Windows Server 2012 R2 virtual machine.


Installing hMailServer


Download and install the .NET Framework 3.5 (includes .NET 2.0 and 3.0).

In Windows Server 2012 R2 open the Server Manager and in the Configuration add roles and features.

Click through the "Role-based or feature-based installation" steps and install the ".NET Framework 3.5 Features" in case they are not already installed.


Download hMailServer via https://www.hmailserver.com/download

As always: the latest version is more stable while the beta versions might provide more functionality and additional bug fixes.

This post is based on version 5.6.6-B2383. Functionality and how-to-clicks might change in other versions.


Note: The Microsoft .NET Framework is required for this installation. In case the .NET installation fails when installed through the hMailServer setup, it's best to cancel the installation and install the required .NET Framework manually instead of using the automatic download and installation offered by hMailServer. In case of such a failure it's best to play it safe: uninstall the mail server again, install the .NET Framework manually, and then re-install hMailServer. (Any left-over directories should be deleted before re-installing.)


For the installation, choose your path and a full installation.

Use the built-in database engine, set a password for the administrative user and install.


Configuring hMailServer


The hMailServer Administrator opens automatically after the installation - if not you will find it in the Start menu.

Connect to the default instance on the localhost. The password is the one set up during the installation process.


Add a domain (e.g. mycompany.com) and save it.

The domain will specify the domain of the mail-addresses e.g. user@domain (me@mycompany.com).


In the domain add an account.

Specify the address (e.g. noreply) and set a password (e.g. ts).

Save the new account.


The default port used for SMTP is 25; for POP3 it's 110. This is configured under Settings > Advanced > TCP/IP ports.

Ensure the ports for SMTP and POP3 are not blocked by a firewall, in case you run into issues later on.


This setup should *usually* work. However, there might be hostname-specific SMTP issues.

In case something happens (or to avoid errors in the first place), go to Settings > Protocols > SMTP > Delivery of e-mail and specify the Local host name. This should be the fully qualified hostname of the server (e.g. myserver.this.company.com).


Test hMailServer via telnet


Note: telnet needs to be installed for this test - in case it's not installed, Google can help.


Open a command line window and execute: telnet <yourhostname> 25

This will open a connection to the SMTP port of the hMailServer. Manual commands can be sent to test whether the basic send functions are working.

The following transcript can be used for testing - it shows manual input and the server's responses.


Username and password need to be Base64 encoded. See https://www.base64encode.org/ for Base64 conversions.

(Tip: text only - don't add additional spaces or line breaks, otherwise the encoded value will be quite different!)


Command / Response                      Description
220 <HOSTNAME> ESMTP                    Connected to host
HELO mycompany.com                      Connect with domain as defined in hMailServer
250 Hello.                              Connected
AUTH LOGIN                              Log in as authenticated user
334 VXNlcm5hbWU6                        Base64 for "Username:"
bm9yZXBseUBteWNvbXBhbnkuY29t            Base64 for "noreply@mycompany.com"
334 UGFzc3dvcmQ6                        Base64 for "Password:"
dHM=                                    Base64 for "ts"
235 authenticated.                      Authentication successful
MAIL FROM: noreply@mycompany.com        Sender address
250 OK
RCPT TO: <your real mail address>       To address
250 OK
354 OK, send.
Subject: sending mail via telnet        Subject line
(blank line)                            Blank line to indicate end of subject
just a simple test!                     Content
.                                       . indicates the end of mail
250 Queued (10.969 seconds)             Mail queued and sent, with duration
QUIT                                    Log off telnet
221 goodbye
Connection to host lost.                Log off confirmed


Configuring ThingWorx


Download and configure the mail extension


Download the MAIL EXTENSION from the ThingWorx Marketplace



In ThingWorx, click Import / Export > Extensions > Import, choose the downloaded .zip file and import it.

The Composer should be refreshed to reflect the changes introduced by the extension.


The Extension created a new Thing Template: MailServer


Create a new Thing based on the MailServer Template. In its configuration adjust the servername and port to match the hMailServer configuration, e.g. localhost and port 25. Change the Mail User Account and Password to the authentication user (e.g. noreply@mycompany.com / ts). Save the configuration to persist the changes.


In any Thing, create a new Service to send mails and notifications. Insert a snippet based on Entities > <yourMailThing> > Send Message

Call the service manually for an initial functional test. It should look similar to this, but parameters need to be adjusted to your environment:


var params = {
  cc: undefined /* STRING */,
  bcc: undefined /* STRING */,
  subject: "sending email via ThingWorx" /* STRING */,
  from: "noreply@mycompany.com" /* STRING */,
  to: "<your real mail address>" /* STRING */,
  body: "just a simple test!" /* HTML */
};

// no return
// service call as inserted by the mail Thing's Send Message snippet
Things["<yourMailThing>"].SendMessage(params);


Check your mailbox for incoming messages!


What next?


The mail server can also be used to receive emails.
So instead of sending mails to your regular mail address and risking a ton of spam (depending on your services and frequency of sending automated emails), you could also configure a local Outlook / Thunderbird / etc. installation and send mails directly to the noreply@mycompany.com address. Those mails can then be downloaded from hMailServer via POP3.


With this the whole send AND receive mechanism is contained within a single (virtual) machine.
