How to get a cost-effective PCB for Production

Designing a PCB for a DIY project involves drawing up the schematics, which are then turned into a PCB layout. The components used in these PCBs will mostly be “Through Hole”, which are commonly available in the market. Once the PCB is printed, using either screen printing techniques or photo-resistive dry films, making alterations to the component mounting pads and connections is still somewhat possible.

When dealing with a professional PCB design, there are many properties we need to consider. DIY PCBs will simply be single sided in most cases. A professional printed circuit board will most likely have more than one layer. The PCB for the PSLab device has 4 layers. Adding more layers to a PCB design makes it easier to route connections, but on the other hand the cost increases steeply. The designer must try to optimize the design to use as few layers as possible. The following table shows the estimated printing cost for 10 PSLab devices if the device had that many layers.

One Layer   Two Layers   Four Layers   Six Layers
$4.90       $4.90        $49.90        $305.92

Once the layer count increases beyond 2, the additional layers become inner layers. The effective area of the inner layers is reduced when the designer adds more through hole components or vias, which connect a trace on one layer to a trace on another layer. The components used are then effectively limited to surface mount components.

Surface Mount components (SMD) are expensive compared to their Through Hole (TH) counterparts, but the smaller size of SMD parts makes it easier to place many components in a smaller area than Through Hole components allow. Soldering and assembling Through Hole components can be done manually using hand soldering techniques. SMD components need special tools and soldering equipment to assemble and solder, and much more precision is required when they are soldered. Hence automated assembly is used in industry, where robot arms place the components and reflow soldering techniques solder the SMD components. In short, the number of SMD components used in the PCB will increase the assembly cost as well as the component cost, but it will greatly reduce the size of the PCB.

SMD components come in different packages. Passive components such as resistors and capacitors come in dimensions ranging from 0.25 mm up to 7.4 mm. The PSLab device uses the 0805/2012 sized package, which is easy to find in the market and big enough to pick and assemble by hand. The package name refers to the component’s dimensions: 0805 reads as 0.08 inches long and 0.05 inches wide.
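
For reference, the most common imperial package codes for passives, their metric equivalents and nominal dimensions are:

Imperial   Metric   Dimensions
0402       1005     1.0 mm × 0.5 mm
0603       1608     1.6 mm × 0.8 mm
0805       2012     2.0 mm × 1.25 mm
1206       3216     3.2 mm × 1.6 mm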

Finding the components in the market is the next challenging task. We can easily purchase components from an online store, but the price will be pretty high. If the design can spare some space, it is wise to place alternative pads for a Through Hole component alongside the SMD component, as Through Hole components can be found much more easily than SMD components in a local store.

The following image, taken from Sparkfun, illustrates different common IC packages. Selecting the correct footprint for the SMD IC and vice versa is very important. It is good practice to check the stores for the availability and prices of the components before finalizing the PCB design with footprints and sending it for printing. We may find that some ICs are not available for immediate purchase because stocks have run out, but a different package of the same IC is available. The designer can then alter the footprint to match the available package and use the more common package type in the design.

Considering all the factors above, a cost-effective PCB can be designed and manufactured once the design is optimized to have the minimum number of layers and components with the minimum cost for both assembly and parts.


Creating Bill of Materials for PSLab using KiCAD

The PSLab device consists of hundreds of electronic components: resistors, diodes, transistors and integrated circuits, to name a few. These components are of two types: through hole and surface mounted.

Surface mount components (SMD) are smaller in size, which makes it hard to hand solder them onto a printed circuit board. We use wave soldering or reflow soldering instead to attach them to a circuit.

Through Hole components (TH) are considerably larger than their SMD counterparts. They are made bigger to make hand soldering easy. These components can also be soldered using wave soldering.

Once a PCB design is complete, the next step is to manufacture it with the help of a PCB manufacturer. They will require the circuit design in “gerber” format along with its Bill of Materials (BoM) for assembly. The common requirement is the BoM as a csv file; some manufacturers will require the file in xml format. There are many plugins available for KiCAD that do the job.
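
For illustration, a minimal BoM in csv format might look like the following sketch (the column names, values and part numbers here are made up for the example; the exact columns required vary by manufacturer):

Reference,Value,Footprint,Quantity,Manufacturer Part Number
"R1, R2",10k,R_0805_2012Metric,2,RC0805FR-0710KL
C1,100nF,C_0805_2012Metric,1,CL21B104KBCNNNC
U1,MCP6004,SOIC-14,1,MCP6004-I/SL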

KiCAD, when first installed, doesn’t come configured with a BoM generation tool, but there are many Python scripts available online free of charge. KiBoM is one of the popular plugins for the task.

Go to the “Eeschema” editor in KiCAD, where the schematic is present, and then click on the “BoM” icon in the menu bar. This will open a dialog box to select which plugin to use to generate the bill of materials.

Initially there won’t be any plugins available in the “Plugins” section. As we add plugins, they are listed so that we can select the one we need. To add a plugin, click on the “Add Plugin” button, which opens a dialog box to browse to the specific plugin we have already downloaded. A set of plugins is available in the KiCAD installation directory.

The path will most probably be (unless you have changed the installation location):

/usr/lib/kicad/plugins

Once a plugin is selected, click on the “Generate” button to generate the BoM file. “Plugin Info” will display where the file was created and its name.

Make sure the BoM file is made compatible with the file required by the manufacturer; that is, remove all the extra content and add necessary details such as manufacturer’s part numbers and references, replacing the auto-generated part numbers.


Auto Deployment of Pull Requests on Susper using Surge Technology

Susper is being improved every day. Following the organization’s best practices, each pull request includes a working demo link of the fix. Currently, the demo link for Susper can be generated using GitHub pages by running two simple commands: ng build and npm run deploy. On slow internet connectivity this process sometimes takes up to 30 minutes to generate a working demo link of the fix.

Surge is a technology which publishes and generates static web page demo links, making it easier for developers to deploy their web apps. There are a lot of benefits of using surge over generating a demo link using GitHub pages:

  • As soon as the pull request passes Travis CI, the deployment link is generated. It is set up so that no extra terminal commands are required.
  • Faster loading compared to a deployment link generated using GitHub pages.

Surge can be used to deploy only static web pages, i.e. websites whose content is fixed.

To implement auto-deployment of pull requests using surge, one can follow these steps:

  • Create a pr_deploy.sh file which will be executed during Travis CI testing.
  • The pr_deploy.sh file is executed after success, i.e. when Travis CI passes, by using the command bash pr_deploy.sh.

The pr_deploy.sh file for Susper looks like this:

#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "Not a PR. Skipping surge deployment."
    exit 0
fi

ng build --prod

npm i -g surge

export SURGE_LOGIN=test@example.co.in
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206

export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-susper.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;


Once the pr_deploy.sh file has been created, execute it from the .travis.yml by adding the command bash pr_deploy.sh, as shown below.
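
For reference, the relevant part of the .travis.yml would then contain something like this minimal sketch (the real file carries the rest of the build configuration as well):

after_success:
  - bash pr_deploy.sh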

In this way, we have integrated the surge technology for auto-deployment of the pull requests in Susper.


Automatically deploy SUSI Web Chat on surge after Travis passes

We have been using surge from the very beginning of the development of the SUSI Web Chat and SUSI Skill CMS projects. We use surge to provide preview links for pull requests. Surge is a really easy tool to use: we can deploy our static web pages quickly and simply. But if the user changes something in a pull request, they have to deploy to surge again and update the link. If we connect this operation with Travis CI we can minimise rework, by embedding the deployment commands inside the travis.yml.

We can tell Travis to make a preview link (surge deployment) if the test cases pass, by embedding the surge deployment commands inside the travis.yml like below.

This is the travis.yml file:

sudo: required
dist: trusty
language: node_js
node_js:
 - 6
script:
 - npm test
after_success:
 - bash ./surge_deploy.sh
 - bash ./deploy.sh
cache:
 directories:
   - node_modules
branches:
 only:
   - master

The surge deployment commands are inside the “surge_deploy.sh” file.
In it, we first check whether the build belongs to a pull request at all; if not, the surge deployment is skipped. We can do it like below.

if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
   echo "Not a PR. Skipping surge deployment"
   exit 0
fi

Then we have to install surge in the environment. After that, install all the npm packages and run the build.

npm i -g surge
npm install
npm run build

Since there is an issue with routing directly to child routes, we have to take a copy of the index.html file and name it 404.html.

cp ./build/index.html ./build/404.html

Then set two environment variables for your surge email address and surge token:

export SURGE_LOGIN=fossasiasusichat@example.com
# surge token (run 'surge token' to get the token)
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206

Now we have to make the surge deployment URL (domain). It should be unique, so we build a URL that contains the pull request number.

export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-susi-web-chat.surge.sh
surge --project ./build/ --domain $DEPLOY_DOMAIN;

Since all our static content produced by the build process is in the “build” folder, we tell surge to serve the static html files from there.
Now make a pull request. You will find the deployment link in the Travis CI report after Travis passes.

Expand the output of surge_deploy.sh in the Travis log.

You will find the deployment link as defined in the surge_deploy.sh file.

References:

  • Integrating with travis ci – https://surge.sh/help/integrating-with-travis-ci
  • React Routes to Deploy 404 page on gh-pages and surge – https://blog.fossasia.org/react-routes-to-deploy-404-page-on-gh-pages-and-surge/

Adding a Last Modified At column in Open Event Server

This blog article will illustrate how, with the help of SQLAlchemy, a last modified at column with complete functionality can be added to the Open Event Server database. To illustrate the process, the article will discuss adding the column to the sessions API. Since last modified at is a time field that needs to be updated each time a user successfully updates a session, the logic to implement will be slightly more complex than the mere addition of a column to the table.

The first obvious step comprises adding the column to the database table. To achieve this, the column has to be added to the model for the sessions table, as well as to the schema.

In app/api/schema/sessions.py:

...
class SessionSchema(Schema):
   """
   Api schema for Session Model
   """
   ...
   last_modified_at = fields.DateTime(dump_only=True)
   ...

And in app/models/sessions.py:

import pytz
...

class Session(db.Model):
   """Session model class"""
   __tablename__ = 'sessions'
   __versioned__ = {
       'exclude': []
   }
   ...
   last_modified_at = db.Column(db.DateTime(timezone=True),
                                default=datetime.datetime.utcnow)

   def __init__(self, ..., last_modified_at=None):
      # inside __init__ method
      ...
      self.last_modified_at = datetime.datetime.now(pytz.utc)
      ...

NOTE: The users of the Open Event Organizer Server operate in multiple time zones, and hence it is important for all the times to be in sync. The Open Event database therefore maintains all times in the UTC timezone (Python’s pytz module takes care of converting a user’s local time into UTC time while storing, thus unifying the timezones). From this it directly follows that the time needs to be timezone aware, hence timezone=True is passed while defining the column.

Next, while initialising an object of this class, the last modified time is the time of creation, and hence datetime.datetime.now(pytz.utc) is set as the initial value, which stores the current time in the UTC timezone.

Finally, the logic for updating the last modified at column every time any other value changes in a session record needs to be implemented. SQLAlchemy provides inbuilt support for detecting update and insert events, which has been used to achieve the goal. To quote the official SQLAlchemy docs, “SQLAlchemy includes an event API which publishes a wide variety of hooks into the internals of both SQLAlchemy Core and ORM.”

from sqlalchemy import event

@event.listens_for(Session, 'after_update')
def receive_after_update(mapper, connection, target):
  target.last_modified_at = datetime.datetime.now(pytz.utc)

The listens_for() decorator registers the listener according to the arguments passed to it. In our case, it registers for the 'after_update' event on the Session model (the sessions table), which fires whenever a session record is updated.

The corresponding function defined below the decorator, receive_after_update(mapper, connection, target), is then called, with the session model (table) as the registered target of the event. It sets the value of last_modified_at to the current time in the UTC timezone, as expected.

Lastly, since the changes have been made to the database schema, the migration file needs to be generated, and the database will be upgraded to alter the structure.

The sequence of steps to be followed on the CLI will be

> python manage.py db migrate
> python manage.py db upgrade
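
For illustration, the upgrade step of the generated migration would contain something along these lines; this is a hedged sketch, as the actual file name, revision identifiers and column constraints are produced by Alembic and will differ:

import sqlalchemy as sa
from alembic import op

def upgrade():
    # add the timezone-aware last modified column to the sessions table
    op.add_column('sessions', sa.Column('last_modified_at',
                  sa.DateTime(timezone=True), nullable=True))

def downgrade():
    # drop the column again when rolling back
    op.drop_column('sessions', 'last_modified_at')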


Deployment terms in Open Event Frontend

In Open Event Frontend, once a pull request is opened, we see some tests running for that specific pull request, like ‘Codacy’, ‘Codecov’, ‘Travis’, etc. New contributors often get confused about what these tests are. So this blog is a walkthrough of the terms we use and what they mean for the PR.

Travis: Every time you make a pull request, you will see this test running and, after some time, giving the output of whether the test passed or failed. Travis is the continuous integration service we use to verify that the changes proposed in the pull request do not break anything else. Sometimes you will see the following message, which indicates that your changes break something unintended.

Thus, by looking at the Travis logs, you can see where the changes proposed in the pull request are breaking things. Thus, you can go ahead and correct the code and push again to run the Travis build until it passes.

Codacy: Codacy is used to check code style, duplication, complexity, coverage, etc. When you create or update a pull request, this test runs and checks whether the code follows a certain style guide, whether there is duplication in the code, and so on. For instance, if your code has an HTML page in which a tag has an attribute that is left undefined, Codacy will throw an error and fail the test. You then need to check the logs and fix the bug in the code. The following message shows that the Codacy test has passed.

Codecov:

Codecov is a code coverage test which indicates how much of the code changed in the pull request is actually executed. Consider that out of the 100 lines of code you wrote, only 80 lines are actually executed by the tests and the rest are not; the code coverage then decreases. The following indicates the Codecov report.

Thus, it can be seen which files are affected and by what percentage.

Surge:

The surge link is nothing but the deployment link of the changes in your pull request.

Thus, by checking the link manually, we can test the behavior of the app in terms of UI/UX and the other features that the pull request adds.


How the Form Mixin Enhances Validations in Open Event Frontend

This blog article will illustrate how the various validations come together in Open Event Frontend in a standard format, greatly reducing code redundancy in declaring validations. Open Event Frontend offers high flexibility in terms of validation options, all stored in a convenient object notation format as follows:

getValidationRules() {
return {
  inline : true,
  delay  : false,
  on     : 'blur',
  fields : {
    identification: {
      identifier : 'email',
      rules      : [
        {
          type   : 'empty',
          prompt : this.l10n.t('Please enter your email ID')
        },
        {
          type   : 'email',
          prompt : this.l10n.t('Please enter a valid email ID')
        }
      ]
    },
    password: {
      identifier : 'password',
      rules      : [
        {
          type   : 'empty',
          prompt : this.l10n.t('Please enter your password')
        }
      ]
    }
  }
};
}

Thus the validations of a form are stored as objects, where the  identifier attribute determines which field to apply the validation conditions to. The rules array contains all the rules to be applied to the determined object. Within rules, the type represents the kind of validation, whereas the prompt attribute determines what message shall be displayed in case there is a violation of the validation rule.


NOTE: In case an identifier is not specified, then the name of the rule object is itself considered as the identifier.
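
For example, in the following hypothetical rule object no identifier is given, so the name of the rule object, email, is itself used as the identifier:

fields: {
  email: {
    rules: [
      {
        type   : 'empty',
        prompt : this.l10n.t('Please enter your email ID')
      }
    ]
  }
}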

These validations are in turn implemented by the FormMixin. The various relevant sections of the mixin will be discussed in detail. Please check references for complete source code of the mixin.

getForm() {
  return this.get('$form');
}

The getForm function returns the calling component’s entire form object on which the validations are to be applied.

onValid(callback) {
this.getForm().form('validate form');
if (this.getForm().form('is valid')) {
  callback();
}
}

The onValid function serves as the boundary level function, and is used to specify what should happen when the validation is successful. It expects a function object as its argument. It makes use of the getForm() function to retrieve the form and, using Semantic UI’s form behavior, determines if all the validations have been successful. In case they have, it calls the callback passed down to it as the argument.
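
For instance, a component’s submit action might use it as in this hypothetical sketch (the action name and the save call are illustrative, not taken from the mixin itself):

actions: {
  submitForm() {
    this.onValid(() => {
      // runs only after every field has passed validation
      this.get('model').save();
    });
  }
}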

Transitioning into the actual implementation, certain behavioral traits of the validations are controlled by two global variables, autoScrollToErrors and autoScrollSpeed. The former is a boolean which is set to true if the application should, once the user presses the submit button and a validation fails, scroll to the first field whose validation failed, at the speed specified by the latter.

Next comes the heart of the mixin. Since a field’s validation failure needs to be checked for continually, the entire checking logic is placed inside a debounce call, which Ember provides to run a function only after a certain specified time (400 ms in this case) has passed since its last invocation. Since validity can be checked only after all the fields have rendered, the debounce call is placed inside a didRender call. The didRender hook ensures that any logic inside it is executed only after all the elements of the relevant component have been rendered and are part of the DOM.

didRender() {
  . . .
  debounce(this, () => {
    const defaultFormRules = {
      onFailure: formErrors => {
        if (this.autoScrollToErrors) {
          // Scroll to the first error message
          if (formErrors.length > 0) {
            $('html,body').animate({
              scrollTop: this.$(`div:contains('${formErrors[0]}')`).offset().top
            }, this.autoScrollSpeed);
          }
        }
      }
    };
  }, 400);
  . . .
}


onFailure is an ES6 arrow function which takes all the form errors as its argument. In case the autoScrollToErrors global variable is set to true, it checks if there are any errors, i.e. whether the length of the formErrors array is more than 0. If there are, it makes a call to animate, scrolling to the very first field which has an error at the speed defined by the previously discussed global autoScrollSpeed. That the very first field is scrolled to is ensured by the index 0 (formErrors[0]). Thus the role of the mixin is to continuously check the validations and, in case there are errors, scroll to them.


Structure of Open Event Frontend

In Open Event Frontend, new contributors often face a dilemma in identifying the proper files to change when they want to contribute. The project structure is quite complex, which is natural for a large project. So, in this blog, we will walk through the structure of Open Event Frontend.

Following are the different folders of the project explained:

Root:
The root of the project contains folders like app, config, kubernetes, tests and scripts. Our main project lives in the app folder, where all the files are present. The config folder in the root has files related to the deployment of the app in development, production, etc. It also has the environment setup, such as host, API keys, etc. Other files such as package.json, bower.json, etc. basically store the current versions of the packages and ease the installation of the project.

App:
The app folder has all the files and is mainly classified into the following folders:

  • adapters
  • components
  • controllers
  • helpers
  • initializers
  • mixins
  • models
  • routes
  • serializers
  • services
  • styles
  • templates
  • transforms
  • utils

The folders with their significance are listed below:

  • Adapters: This folder contains the files for building URLs for our endpoints. Sometimes we need a somewhat customised URL for an endpoint, which we modify through an adapter (see the sketch after this list).
  • Components: This folder contains the different components we reuse in our app. For example, the image uploader component can be used at multiple places in our app, so we keep such elements in components. This folder basically contains the js files of all the components (since generating a component produces a js file and an hbs template).
  • Controllers: This folder contains the controller associated with each route. Since a main principle of Ember JS is DDAU, i.e. data down, actions up, all the actions are written in the files of this folder.
  • Helpers: We often need to format dates and times, encode URLs, etc. There are some predefined helpers, but sometimes custom helpers are also needed. All of them are written in the helpers folder.
  • Initializers: This folder has a single file for now, called ‘blanket.js’, which basically injects the services into our routes and components. So if you want to write a service and inject it into routes/components, it goes in here.
  • Mixins: In Ember JS the Mixin class can create objects whose properties and functions can be shared amongst other classes and instances. This allows an easy way to share behavior between objects, as well as to design objects that may need multiple inheritance. All mixins used in the application are in this folder.
  • Models: This folder contains the schemas for our data. Since we are using Ember Data, we need a proper skeleton of the data, and all of it goes in this folder. Observing this folder will show you models like user, event, etc.
  • Routes: This folder contains the js files of the routes created. Routes handle which template to render, what to return from the model, etc.
  • Serializers: We use serializers to modify the data that Ember sends automatically in a request. Suppose we fetch a user with the help of the user model but don’t want the password attribute present in it; we can omit it by defining that in a serializer.
  • Services: Services are Ember objects that are available throughout the running time of the application. They are used to perform tasks like getting the current user model, making third party API calls, etc. All such services go in this folder.
  • Styles: As the name infers, all the style sheets go in here.
  • Templates: A template is generated with the generation of each route and component. All of them go here; thus, the markup is written over here.
  • Transforms: Ember Data has a feature called transforms that allows values to be transformed before they are set on a model or sent back to the server. In our case, we have a transform called moment.
  • Utils: This folder contains some reusable functions exported as modules. There is some JSON data as well.
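
As an illustration of the adapters folder mentioned above, a minimal Ember Data adapter overriding an endpoint's path might look like the following hedged sketch (the path name here is hypothetical, not from the actual codebase):

import ApplicationAdapter from './application';

export default ApplicationAdapter.extend({
  // hypothetical: serve this model from a custom, non-default path
  pathForType() {
    return 'custom-endpoint';
  }
});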

References:
  • Ember JS official guide: https://guides.emberjs.com/v2.17.0/
  • Blog post: https://spin.atomicobject.com/2015/09/17/ember-js-clean/
  • Blog post: http://www.programwitherik.com/ember-pods/


The Road to Success in Google Summer of Code 2017

It’s the best time for GCI students to get an overview of the GSoC experience, and for all aspiring participants to get involved in the different projects of FOSSASIA.

I’m a junior year undergraduate student pursuing a B.Tech in Electrical Engineering from the Indian Institute of Technology Patna. This summer, I spent my time coding in Google Summer of Code with the FOSSASIA organization. It feels great to be an open-source enthusiast, and Google as a sponsor makes it the icing on the cake. People can learn new things here and meet new people.

I came to know about GSoC through senior colleagues who were selected for GSoC in 2016. It was around September 2016, I was in the 2nd year of college, and the results of that year’s GSoC had just been declared.

What is GSoC?

Consider GSoC as a big bowl which has lots of small balls, and those small balls are open-source organizations. Google basically acts as a sponsor for the open-source organizations. A timeline is proposed, organizations apply, and then students select their favorite organization and start to contribute to it. Believe me, it’s not specific to the computer science branch; anyone can take part in it, and there is no minimum CPI requirement. I consider myself one of the examples: from the electrical branch, with not-so-good academic performance, yet successfully part of GSoC 2017.

How to select an organization?

This is the most important step and it takes time. I wandered around 100 organizations to find where my interest actually lay. Here is how to sort this out and find your organization a little quicker. Take a pen and paper (kindly don’t use the notepad of your PC) and write down your fields of interest in computer science. Number every point in decreasing order of your interest. Then for each field write down its basic prerequisites. Visit the GSoC website and go to the organizations tab, where there is a filter for searching by an organization’s field of work. Select one organization, dig through its website, and look at its previous projects and applications. If nothing fits you, repeat the same with another organization. If the organization interests you, then look for one of its projects. First of all, look at the application of the project, give that application a try, and do give feedback to the organization. Then try to find out what languages, modules, etc. the project uses and how the project works. Don’t worry if nothing goes into your mind at first. Find the developers’ mailing list, their chat channel and their code base, and ask the developers there for help.

First Love It:

Open-Source: it’s a different world which exists on Earth. All the organizations are open-source and all their code is open and free to view. Find the things that interest you the most and start to love the work. If you don’t understand some code, learn by doing and by asking. Most of the time we don’t get favorable responses; in such times we need to carry on and have patience for the best to happen.

My Favourite part:

GSoC has been my dream since the day I came to know about it. It’s only through this that one gets a chance to explore open-source software, and organizations get a chance to hire on-board developers. This is a great initiative taken by Google which brings hope for developers to increase the use of open-source. It is one of the ways through which one can look into the code of other developers, help them out and also get helped.

GSoC is the platform through which one can implement lots of new things, meet new people, develop new software and see the world around in a different way. That’s what happened with me: just at the end of the first phase, my love towards open-source had increased exponentially. Now I see every problem in my life as something to solve through open-source. Whether it’s arranging an event or designing an invitation, I am encouraged to use open-source tools to help me out. It becomes very easy to distribute data and convey information through open-source, so people can reach you much more easily.

You always see a thing according to your own perspective, and it’s always the best; but open-source gives it a view through the perspective of the world and gets the best from everyone through a compilation of all the sources. One can give ideas and views, find something that others can’t even see, and increase one’s karma through contribution. And all these things have been made possible through Google. I became such that I could donate the rest of my life to working for open-source. GSoC is responsible for including open-source contribution in my daily life. It made me feel really bad if my GitHub profile page had 0 contributions at the end of the day. Open Source opens the door to another world.

Challenging part:

To conclude, I would say that GSoC made me love a challenge. I became such that things which come easily to me don’t taste good at all. Specifically, GSoC’s most challenging part is getting into it, that is, getting selected. I still can’t believe that I was selected. From then onwards it’s just fun and learning. Each and every day I encountered several issues, bugs, etc., but just before going to bed at night there were things which collectively made me feel that, whether the bug had been solved or not, I had managed to break the uppermost covering of that conch shell. Such things increase the motivation and light up the enthusiasm to tackle the problem. Open-Source not only taught me to control different snapshots of software but also of time. I learnt to manage the day’s different tasks efficiently, with open-source contribution included as part of my daily life.

Advice to students:

The only problem new developers have is getting started. I’d advise them to close their eyes and dive into it without thinking about whether they will be able to complete the task or not. Believe me, you will gradually find that, whether the task is completed or not, you are in a much better condition than you were at the time you began it.

Just learn by doing things.

Make mistakes and list them as “things that will not work”, so others may read it and avoid them.

GSoC Project link: https://summerofcode.withgoogle.com/projects/#5560333780385792
Final Code Submission: https://gist.github.com/meets2tarun/270f151d539298831ce542be5f733c82


FOSSASIA Summit 2018: “The Open Conversational Web” with Open Source AI

FOSSASIA teams up with Science Centre Singapore and the Lifelong Learning Institute for Asia’s premier open technology summit. The FOSSASIA OpenTechSummit is taking place from March 22-25, 2018 under the tagline “The Open Conversational Web”, with a strong focus on Artificial Intelligence and Cloud for Industry 4.0. More than 200 speakers will fly in to present at the event. International exhibitors will showcase their latest advancements and meet developers in a careers fair.

The FOSSASIA Open Tech Summit is an annual tech event featuring tech icons from around the world since 2009. The event is all about the latest and greatest open source technologies and their impact and applications on business and society. With more than 3,000 attendees the FOSSASIA Summit is the biggest gathering of Open Source developers and businesses in Asia. A great feature of 2018 is the expanded exhibition space where tech businesses, SMEs and startups converge with developers and customers and meet potential candidates in a careers fair.

“The goal of the FOSSASIA Summit is to bring together developers, technologists and businesses to collaborate, share and explore the full potential of open source to create opportunities for new industries. And, right now there is a shift happening where users increasingly communicate through their voice with computer applications enhanced by Open Source AI technologies,” says Ms. Hong Phuc Dang, chair of the summit, and continues: “We expect to see interesting new voice gadgets to try out at the event. And, attendees will be able to learn how to develop solutions for these new voice interface devices here.”

Associate Professor Lim Tit Meng, CE of Science Centre adds: “Technologies like Big Data, AI and VR, and the web itself are becoming more open and conversational. They are also the engines behind the Industry 4.0 innovations. The open source community, with its spirit of co-creation and sharing is at the forefront of conversations on the web. At Science Centre Singapore, we aim to showcase and create content using these technologies and look forward to learning from and working with the open source community.”

The call for speakers is open and “we are seeing a large increase in proposals this year” says Mr. Mario Behling from the FOSSASIA Summit committee. Speakers are expected from companies such as car manufacturer Daimler, tech companies like Google, Microsoft, Oracle, Samsung, Intel and from many Singapore startups with topics ranging from algorithms and cognitive experts to DevOps, cloud containers, Blockchain and Neurotechnologies. Voice assistants and Open Source development solutions for SUSI.AI, Amazon Alexa, Google Assistant, Microsoft Cortana, Siri, and solutions using Nuance or IBM Watson are a big topic.

Tickets are available on the website 2018.fossasia.org.

The press representatives signup is here.
