Toggling Voice On/Off in SUSI Chromebot

SUSI Chromebot has many features that make it one of the best projects of FOSSASIA. Recently, voice/speech output was added to SUSI Chromebot, but there was no option to control whether speech output was wanted or not. The latest addition to SUSI Chromebot is toggling SUSI's voice on or off.

How was it achieved?

Toggling voice for SUSI required adding a button and a snippet of JavaScript code to the main JS file. The code takes care of whether the voice is toggled on or off. I started off by adding a button to the main HTML file:

```html
<a href="javascript:void(0)" id="speak" style="color: white"><i class="material-icons" id="speak-icon">volume_up</i></a>
```

The above snippet of HTML code adds a voice button to the top bar of Chromebot. Then came the major part: adding the JavaScript code that gives the button its functionality.

```javascript
var shouldSpeak = true;
```

I started off by creating a variable called "shouldSpeak" which determines whether or not SUSI should use Chrome's speech synthesis API to speak. Then I changed the "speakOut()" function and added another parameter to it:

```javascript
function speakOut(msg, speak = false) {
    if (speak) {
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
}
```

The above code makes sure that SUSI speaks only when the "speak" parameter is set to true. Then an event listener was added to the button to link in the functionality:

```javascript
document.getElementById('speak').addEventListener('click', changeSpeak);
```

This adds a click event to "speak" and associates it with the function "changeSpeak". The function "changeSpeak", defined below, toggles the on/off state of voice in SUSI Chromebot:

```javascript
function changeSpeak() {
    shouldSpeak = !shouldSpeak;
    var SpeakIcon = document.getElementById('speak-icon');
    if (!shouldSpeak) {
        SpeakIcon.innerText = "volume_off";
    } else {
        SpeakIcon.innerText = "volume_up";
    }
    console.log('Should be speaking? ' + shouldSpeak);
}
```

Every time the user clicks the icon to toggle voice on or off, the icon must also change, and that is taken care of by the above piece of code.

Resources

- SUSI Chromebot repository: https://github.com/fossasia/susi_chromebot
- Pull request for the same: https://github.com/fossasia/susi_chromebot/pull/113
- Issue for the same: https://github.com/fossasia/susi_chromebot/issues/110
- Read about speech synthesis: https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis
- Learn how to add event listeners to objects: https://www.w3schools.com/jsref/met_document_addeventlistener.asp
- What is "javascript:void(0)": https://stackoverflow.com/questions/1291942/what-does-javascriptvoid0-mean
- Read about Material icons: http://materializecss.com/icons.html
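The toggle above is tied to the DOM, but its state logic can be pulled out and exercised on its own. A minimal sketch (the function name here is illustrative, not from the Chromebot source):

```javascript
// Standalone sketch of the toggle state machine, independent of the DOM.
// Given the current shouldSpeak flag, return the new flag together with
// the Material icon name the button should display.
function nextSpeakState(shouldSpeak) {
    var speaking = !shouldSpeak;
    return {
        shouldSpeak: speaking,
        icon: speaking ? 'volume_up' : 'volume_off'
    };
}
```

Keeping the state transition separate from `document.getElementById` calls makes it easy to verify that two clicks always return the UI to its starting state.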


Resolving Internal Error on Badgeyay

Badgeyay is in the development stage and frequently encounters bugs. One such bug was the Internal Server Error in Badgeyay.

What was the bug?

The bug was in the badge generator's backend code. The generator was trying to serve a zip file that was not present. After going through the log I noticed that it was because a folder was missing from Badgeyay's directory. I immediately filed issue #58, which stated the bug and how it could be resolved. After being assigned to the issue I did my work and created a pull request that was merged soon. The pull request can be found here.

Resolving the bug

With the help of extensive error management and proper code and log analysis I was able to figure out a fix for this bug. It was in fact due to a missing folder that was deleted by subsequent code during zip file/PDF generation. It was supposed to be recreated every time it was deleted. I quickly designed a function that solved this error for future usage of Badgeyay.

How was it resolved?

First I check whether the "BADGES_FOLDER" is present, and if it is not, the folder is created using the commands below:

```python
if not os.path.exists(BADGES_FOLDER):
    os.mkdir(BADGES_FOLDER)
```

Then comes the remaining part of the code, which empties all the files and folders inside the "BADGES_FOLDER". There are two kinds of things we may have to delete, a file or a folder, so proper instructions are added to handle both file deletion and folder deletion:

```python
for file in os.listdir(BADGES_FOLDER):
    file_path = os.path.join(BADGES_FOLDER, file)
    try:
        if os.path.isfile(file_path):
            os.unlink(file_path)
        elif os.path.isdir(file_path):
            shutil.rmtree(file_path)
    except Exception:
        traceback.print_exc()
```

Here "os.unlink" is a function used to delete a single file, and "shutil.rmtree" is a function that deletes a whole folder at once, similar to running "rm -rf directory" on the command line.
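The two snippets combine naturally into a single self-contained helper. A sketch (the function name is mine, not from the Badgeyay code; `os`, `shutil` and `traceback` are the modules the post relies on):

```python
import os
import shutil
import traceback


def ensure_empty_folder(path):
    """Recreate the folder if it is missing, then empty it.

    Mirrors the fix described above: the folder must exist before the
    generator writes into it, and any leftovers from a previous run
    (files or subfolders) are removed.
    """
    if not os.path.exists(path):
        os.mkdir(path)
    for name in os.listdir(path):
        entry = os.path.join(path, name)
        try:
            if os.path.isfile(entry):
                os.unlink(entry)       # delete a single file
            elif os.path.isdir(entry):
                shutil.rmtree(entry)   # delete a whole folder tree
        except Exception:
            # Log and keep going so one bad entry does not abort the cleanup.
            traceback.print_exc()
```

Calling this once at the start of every generation run guards against both the missing-folder case and stale output from earlier runs.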
Proper error handling is done as well, to ensure the stability of the program.

Challenges

I faced many problems while working on this bug. It was my first time solving a bug, so I was nervous; I had no knowledge of the "shutil" library, and I was a newcomer. But I took these problems as challenges and was able to fix the bug that caused the INTERNAL SERVER ERROR: 500.

Resources

- BadgeYay repository: https://github.com/fossasia/badgeyay
- Pull request for the same: https://github.com/fossasia/badgeyay/pull/59
- Issue for the same: https://github.com/fossasia/badgeyay/issues/58
- Learn about the os module: https://docs.python.org/2/library/os.html
- Learn about the shutil module: https://docs.python.org/2/library/shutil.html
- Read about error handling: https://docs.python.org/3/tutorial/errors.html
- Learn how to delete a file or folder in Python: https://stackoverflow.com/questions/6996603/how-to-delete-a-file-or-folder


Comparison between SUSI AI with Mycroft AI and Amazon Alexa

Now is the era of Voice User Interface (VUI) devices, and they play a very important role as personal assistants. Here we compare SUSI AI, Mycroft AI and Amazon Alexa based on the number of skills, their availability, how easy it is to add and edit skills, whether the user can modify a skill and extend it if needed, etc. Issue: https://github.com/fossasia/labs.fossasia.org/issues/215

The comparison:

- Number of skills: Amazon Alexa supports far more skills than both Mycroft AI and SUSI AI.
- Availability: Mycroft AI and SUSI AI are available everywhere and can be set up anywhere regardless of country, whereas Alexa is available in the U.S., U.K., Germany and India, though Amazon is aggressively expanding.
- Adding and editing skills: Mycroft and SUSI are open source, so their skills can be added, edited and viewed by the open source community, and issues can be filed to enhance the functionality of a skill. Alexa skills are not open source, and certification and publishing of a skill is done by the Amazon team. Mycroft and SUSI skills can be customized by the user, but this fails with Alexa, where users have to recreate the same skill from scratch if they want to customize it.
- Platforms supported: Mycroft, SUSI and Alexa all support Linux. Mycroft lacks support for Windows and Mac but supports Raspberry Pi and Android; Alexa provides support for Windows, Mac and Raspberry Pi. SUSI also provides support for Android and iOS, and can be integrated with speakers, vehicles, the Pi, etc.
- Dedicated devices: As of now, SUSI AI lacks such a device. Mycroft has the Mark 1 and Alexa has the Echo. These devices are portable and are good candidates for home automation.
- Languages used for skill development: Mycroft mostly uses Python. Alexa uses Python, Node.js, C#, etc. for the development of applications. SUSI uses its own language, but languages like JavaScript can be included in it.
It is easier to specify patterns using wildcards and variables in SUSI. Because of the different languages used, Mycroft AI skills can't be directly used in SUSI AI; we need to convert Mycroft skills to SUSI skills if they are to be used with SUSI.

Some suggestions for making a dedicated device for SUSI: We can use a Raspberry Pi, USB headphones and a microphone to make a basic platform. We can install Jasper to enable voice input on the Pi. Jasper is an open source application that enables us to make voice-controlled applications. We can use the SUSI server to interact with the device and with home appliances such as lights. The SUSI server can process the states of an appliance (lights in this case) and return them as JSON objects to the Raspberry Pi, which may then change the state as per user input.

Make a simple Hello World skill with SUSI: Visit https://github.com/fossasia/susi_skill_cms/blob/master/docs/Skill_Tutorial.md for a basic introduction to SUSI skill syntax and how it works. Go to http://dream.susi.ai. Enter the skill name, say "hello". You…
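Going by the skill tutorial linked above, a minimal "hello" rule in SUSI's pattern/answer syntax might look something like this (illustrative; check the tutorial for the exact format):

```
hi|hello|hello susi
Hello! I am SUSI. Nice to meet you.|Hi! How can I help you?
```

The first line lists alternative query patterns separated by `|`, and the second line lists possible answers, one of which is picked as the reply.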


Announcing the FOSSASIA Codeheat Winners 2017/18

Today we are very proud to announce our Grand Prize Winners and Finalist Winners of Codeheat 2017/2018. Codeheat was epic in every regard. Participants not only solved a massive number of issues in FOSSASIA's projects, reviewed pull requests, shared scrums, and wrote blog posts, but most importantly they encouraged and helped one another to learn and progress along the way. It was a very, very busy five months for everyone - we had 647 developers from 13 countries participating in the contest, supported by 43 mentors. Thank you all for this amazing achievement!

With so much excellent work getting done, it was super hard to choose the Grand Prize and Finalist Winners of the contest. Our winners stand out in particular as they contributed to FOSSASIA projects at a continuously high level, following our Free and Open Source best practices. They worked in different areas - code, reviews, blog posts - and supported other community members. Each of the Grand Prize Winners is awarded a travel grant to join us at the FOSSASIA Summit in Singapore from March 22-25, 2018, where they will receive the official Codeheat award and meet with mentors and FOSSASIA developers. The other Finalist Winners will receive travel support vouchers to go to a Free and Open Source Software event of their choice. Active participants will also receive a certificate in the upcoming weeks. FOSSASIA mentors will meet many contributors and hand out prizes and T-shirts at our regular meetups and events across Asia.

Congratulations to our Grand Prize Winners, Finalist Winners, and all of the participants who spent the last few months learning, sharing and contributing to Free and Open Source projects. Well done! We are deeply impressed by your work, your progress and advancement.
The winners are (in alphabetical order):

Grand Prize Winners

- Manish Devgan
- Parth Shandilya
- Raghav Jajodia

Finalist Winners

- Anshuman Verma
- Ayush Gupta
- Bhavesh Anand
- Mohit Sharma
- Nikit Bhandari
- Ritika Motwani
- Vaibhav Singh

About Codeheat

Codeheat is a contest that the FOSSASIA organization is honored to run every year. We saw immense growth this year in participants and the depth of contributions.

Thank You, Mentors and Supporters

Our 40+ mentors and many project developers, the heart and soul of Codeheat, are the reason the contest thrives. Mentors volunteer their time to help participants become open source contributors. Mentors spent hundreds of hours answering questions, reviewing submitted code, and welcoming new developers to the projects. Codeheat would not be possible without their patience and tireless efforts. Learn more about this year's mentors on the Codeheat website.

Certificate of Participation

Participating developers, mentors and the FOSSASIA admin team learnt so much, and it was an amazing and enriching experience; we believe the learnings are the main take-away of the program. We hope to see everyone continuing their contributions, sharing what they have learnt with others, and seizing the opportunity to develop their code profile with FOSSASIA. We want to work together with the Open Tech community to improve people's lives and create a…


Installing Susper Search Engine and Deploying it to Heroku

Susper is a decentralized search engine that uses the peer-to-peer system YaCy and Apache Solr to crawl and index search results. Search results are displayed using the Solr server which is embedded into YaCy. All search results must be provided by a YaCy search server, which includes a Solr server with a specialized JSON result writer. When a search request is made in one of the search templates, an HTTP request is made to YaCy. The response is JSON, because JSON can be parsed much more easily than XML in JavaScript. In this blog, we will talk about how to install the Susper search engine locally and deploy it to Heroku (a cloud application platform).

How to clone the repository

Sign up / log in to GitHub and head over to the Susper repository. Then follow these steps.

1. Fork the repository https://github.com/fossasia/susper.com
2. Clone the forked version to your local machine:

```shell
git clone https://github.com/<username>/susper.com.git
```

3. Add an upstream remote to synchronize your repository:

```shell
git remote add upstream https://github.com/fossasia/susper.com.git
```

Getting started

The Susper search application basically requires the following:

- Angular CLI
- node --version >= 6
- npm --version >= 3

First, install angular-cli using the following command:

```shell
npm install -g @angular/cli@latest
```

After installing angular-cli, install the required node modules:

```shell
npm install
```

Serve the application locally:

```shell
ng serve
```

Go to localhost:4200, where the application will be running locally.

How to deploy the Susper search engine to Heroku

We need to install Heroku on our machine. Type the following in your Linux terminal:

```shell
wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh
```

This installs the Heroku Toolbelt on your machine so that you can access Heroku from the command line.
Create a Procfile inside the root directory and write:

```
web: ng serve
```

Next, we need to log in to our Heroku server (assuming that you have already created an account). Type the following in the terminal:

```shell
heroku login
```

Enter your credentials and log in. Once logged in, we need to create a space on the Heroku server for our application. This is done with the following command:

```shell
heroku create
```

Add the Node.js buildpack to the app:

```shell
heroku buildpacks:add --index 1 heroku/nodejs
```

Then we deploy the code to Heroku:

```shell
git push heroku master
git push heroku yourbranch:master # If you are in a branch other than master
```

Resources

- Documentation | Heroku Dev Center: https://devcenter.heroku.com/categories/reference
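The Procfile described above is a single line of configuration; creating it from the shell in the repository root and checking its contents looks like this:

```shell
# Create the Procfile Heroku reads to decide how to start the web dyno,
# then print it back to verify the contents.
echo "web: ng serve" > Procfile
cat Procfile
```

Heroku picks the file up automatically on the next `git push heroku master`.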


Installing the Loklak Search and Deploying it to Surge

The Loklak search creates a website using the Loklak server as a data source. The goal is to get a search site that offers timeline search as well as custom media search, account and geolocation search. In order to run the service, you can use the API of http://api.loklak.org or install your own Loklak server data storage engine. Loklak_server is a server application which collects messages from various social media sources, including Twitter. The server contains a search index and a peer-to-peer index sharing interface. All messages are stored in an Elasticsearch index. The site of this repo is deployed on the GitHub gh-pages branch and automatically deployed here: http://loklak.org. In this blog, we will talk about how to install Loklak_Search locally and deploy it to Surge (static web publishing for front-end developers).

How to clone the repository

Sign up / log in to GitHub and head over to the Loklak_Search repository. Then follow these steps.

1. Fork the repository https://github.com/fossasia/loklak_search
2. Clone the forked version to your local machine:

```shell
git clone https://github.com/<username>/loklak_search.git
```

3. Add an upstream remote to synchronize your repository:

```shell
git remote add upstream https://github.com/fossasia/loklak_search.git
```

Getting started

The Loklak search application basically requires the following:

- Angular CLI
- node --version >= 6
- npm --version >= 3

First, install angular-cli using the following command:

```shell
npm install -g @angular/cli@latest
```

After installing angular-cli, install the required node modules:

```shell
npm install
```

Serve the application locally:

```shell
ng serve
```

Go to localhost:4200, where the application will be running locally.

How to deploy Loklak Search on Surge

Surge is a technology which publishes or generates a static web-page demo link, making it easier for developers to deploy their web apps.
There are many benefits of using Surge over generating a demo link with GitHub Pages. We need to install Surge on our machine. Type the following in your Linux terminal:

```shell
npm install --global surge
```

This installs Surge on your machine so that you can access it from the command line. In your project directory, just run:

```shell
surge
```

After this, it will ask you for three parameters, namely:

- Email
- Password
- Domain

After specifying all three parameters, the deployment link with the respective domain is generated.

Auto deployment of pull requests using Surge

To implement auto-deployment of pull requests using Surge, follow these steps. Create a pr_deploy.sh file; it will be executed only on success of Travis CI, i.e. when Travis CI passes, using the command bash pr_deploy.sh.

```shell
#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "Not a PR. Skipping surge deployment."
    exit 0
fi
npm i -g surge
export SURGE_LOGIN=test@example.co.in # Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206
export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-LoklakSearch.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN
```

Here, Travis CI first installs Surge locally with npm i -g surge, and then we export the environment variables SURGE_LOGIN, SURGE_TOKEN and DEPLOY_DOMAIN.…
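The per-PR domain in the script above is plain shell variable expansion; Travis CI sets TRAVIS_PULL_REQUEST to the pull request number. For example (the PR number here is made up):

```shell
# Sketch of how the per-PR Surge domain is derived from the Travis
# environment. 42 stands in for a real pull request number.
TRAVIS_PULL_REQUEST=42
DEPLOY_DOMAIN="https://pr-${TRAVIS_PULL_REQUEST}-fossasia-LoklakSearch.surge.sh"
echo "$DEPLOY_DOMAIN"
```

Because the PR number is part of the domain, every pull request gets its own stable preview URL that reviewers can open directly.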


How to organise a successful Google Code-In meetup

In this blog post I hope to write about what Google Code-In is and the best way to organise a successful Google Code-In meetup or workshop in your local community. I hope you will find everything that you need to know about conducting a successful meetup.

What is Google Code-In?

Google Code-In is a global, open source contest funded by Google to give real-world software development experience to pre-university students in the 13-17 age range. Besides software development, the contest's main objective is to motivate tech-enthusiastic students to contribute to open source and to give them knowledge about open source software development. The usual timeline of the contest is that it opens for students at the end of November and runs until the middle of January. There are 25 open source organizations participating in Google Code-In this time.

Your role?

As a GCI mentor, past GCI student or open source contributor, you have a responsibility towards the community: to expand community awareness and transfer your knowledge to the next generation. You gathered experience while working on open source projects, and GCI is the best place to pass that knowledge on to youngsters while working with them. You should be devoted to guiding students and giving them an introduction to open source software development.

How can students be a part of the contest?

Any pre-university student in the 13-17 age group can register for the contest. The following four steps need to be followed by a student to be eligible to compete in the contest:

1. Sign up at g.co/gci after reading the contest rules.
2. Ask their parent or legal guardian to sign the parental consent form.
3. Find a task that interests them.
4. Claim the task and start working on it while getting guidance from the mentors.

In return for their hard work and open source contributions, students can win digital certificates, t-shirts and hoodies based on their performance, as well as a trip to Google Headquarters for the Grand Prize Winners.
How to organize a local meetup?

Since the Google Code-In contest is for pre-university students, I highly recommend that you organize a meetup with schools in the community. You can easily contact the school club or society related to information and communication technology and convey your idea for the meetup, so that the responsible person can get management approval from their side to facilitate your meetup inside the school.

If you are not confident enough to conduct a session on your own, maybe because this is a new experience for you, don't worry! You can always call on other past GCI students, GCI mentors or open source contributors to collaborate with you in conducting a successful session. As the open source world teaches us, it is always collaboration that brings success to any project.

To start the meetup, you need to give an introduction to Google Code-In. You may get different questions from the audience, such as "What…


The Road to Success in Google Summer of Code 2017

Now is the best time for GCI students to get an overview of the GSoC experience, and for all aspiring participants to get involved in the different projects of FOSSASIA. I'm a junior-year undergraduate student pursuing a B.Tech in Electrical Engineering at the Indian Institute of Technology Patna. This summer, I spent my time coding in Google Summer of Code with the FOSSASIA organization. It feels great to be an open source enthusiast, and having Google as a sponsor is the icing on the cake. People can learn new things here and meet new people. I came to know about GSoC through my senior colleagues who were selected for GSoC in 2016. It was around September 2016 and I was in the 2nd year of my college; at that time the results of the previous year's GSoC had just been declared.

What is GSoC?

Consider GSoC as a big bowl that holds lots of small balls, and those small balls are open source organizations. Google basically acts as a sponsor for the open source organizations. A timeline is set for the accepted organizations, and then students select their favorite organization and start to contribute to it. Believe me, it is not specific to the computer science branch; anyone can take part in it, and there is no minimum CPI requirement. I consider myself one of the examples: I am from the electrical branch with not-so-good academic performance, yet I was successfully part of GSoC 2017.

How to select an organization?

This is the most important step and it takes time. I wandered through around 100 organizations to find where my interest actually lay. But now I'll describe how to shorten this search and find your organization a little quicker.

1. Take a pen and paper (kindly don't use the notepad on your PC) and write down your fields of interest in computer science.
2. Number every point in decreasing order of your interest.
3. For each field, write down its basic prerequisites.
4. Visit the GSoC website and go to the organizations tab; there is a filter for searching by the organizations' working fields.
Select one organization at a time and dig through its website; look at its previous projects and applications. If nothing fits you, repeat the same with another organization. And if the organization interests you, then look for a project of that organization. First of all, look at the application of the project, give that application a try, and make sure to give feedback to the organization. Then try to find out what languages, modules, etc. the project uses and how the project works. Don't worry if nothing goes into your mind at first. Find the developers' mailing list, their chat channel and their code base, and ask the developers there for help.

First, love it: open source is a different world that exists on Earth. All the organizations are open source and all their code is open and free to view. Find the things that interest you the most and start to love the work. If you don't understand some code, learn by doing and asking. Most of the times we…


Enhancing Rotation in Phimp.me using Horizontal Wheel View

To implement rotation of an image in Phimp.me, we have used the Horizontal Wheel View library, a custom view for user input that models a horizontal wheel controller. How did we include this feature using jitpack.io?

Step 1: Installation

The jitpack.io repository has to be added to the root build.gradle:

```
allprojects {
    repositories {
        jcenter()
        maven { url "https://jitpack.io" }
    }
}
```

Then, add the dependency to your module build.gradle:

```
compile 'com.github.shchurov:horizontalwheelview:0.9.5'
```

Sync the Gradle files to complete the installation.

Step 2: Setting up the layout

The HorizontalWheelView has to be added to the XML layout file as shown below. It is wrapped inside a FrameLayout to give weight to the view:

```xml
<FrameLayout
    android:layout_width="match_parent"
    android:layout_height="0dp"
    android:layout_weight="2">

    <com.github.shchurov.horizontalwheelview.HorizontalWheelView
        android:id="@+id/horizontalWheelView"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_toStartOf="@+id/rotate_apply"
        android:padding="5dp"
        app:activeColor="@color/accent_green"
        app:normalColor="@color/black" />

</FrameLayout>
```

To display the angle by which the image has been rotated, a simple TextView is added just above it:

```xml
<TextView
    android:id="@+id/tvAngle"
    android:layout_width="match_parent"
    android:layout_height="0dp"
    android:layout_gravity="center"
    android:layout_weight="1"
    android:gravity="center"
    android:textColor="@color/black"
    android:textSize="14sp" />
```

Step 3: Updating the UI

First, declare and initialise objects of HorizontalWheelView and TextView:

```java
HorizontalWheelView horizontalWheelView = (HorizontalWheelView) findViewById(R.id.horizontalWheelView);
TextView tvAngle = (TextView) findViewById(R.id.tvAngle);
```

Second, set a listener on the HorizontalWheelView and update the UI accordingly.
```java
horizontalWheelView.setListener(new HorizontalWheelView.Listener() {
    @Override
    public void onRotationChanged(double radians) {
        updateText();
        updateImage();
    }
});
```

updateText() updates the angle and updateImage() updates the image to be rotated. These functions are defined below:

```java
private void updateText() {
    String text = String.format(Locale.US, "%.0f°", horizontalWheelView.getDegreesAngle());
    tvAngle.setText(text);
}

private void updateImage() {
    int angle = (int) horizontalWheelView.getDegreesAngle();
    // Code to rotate the image using the variable 'angle'
    rotatePanel.rotateImage(angle);
}
```

rotateImage() is a method of 'rotatePanel', which is an object of RotateImageView, a custom view that rotates the image. Let us look at some of the code inside RotateImageView:

```java
private int rotateAngle;
```

'rotateAngle' is a global variable holding the angle by which the image has to be rotated.

```java
public void rotateImage(int angle) {
    rotateAngle = angle;
    this.invalidate();
}
```

The method invalidate() is used to trigger a UI refresh, and every time the UI is refreshed, the draw() method is called. We have to override the draw() method and write the main code to rotate the image in it.
The draw() method is defined below:

```java
@Override
public void draw(Canvas canvas) {
    super.draw(canvas);
    if (bitmap == null)
        return;
    maxRect.set(0, 0, getWidth(), getHeight()); // The maximum bounding rectangle
    calculateWrapBox();
    scale = 1;
    if (wrapRect.width() > getWidth()) {
        scale = getWidth() / wrapRect.width();
    }
    canvas.save();
    canvas.scale(scale, scale, canvas.getWidth() >> 1, canvas.getHeight() >> 1);
    canvas.drawRect(wrapRect, bottomPaint);
    canvas.rotate(rotateAngle, canvas.getWidth() >> 1, canvas.getHeight() >> 1);
    canvas.drawBitmap(bitmap, srcRect, dstRect, null);
    canvas.restore();
}

private void calculateWrapBox() {
    wrapRect.set(dstRect);
    matrix.reset(); // Reset the matrix to the identity matrix
    int centerX = getWidth() >> 1;
    int centerY = getHeight() >> 1;
    matrix.postRotate(rotateAngle, centerX, centerY); // Rotate about the center
    matrix.mapRect(wrapRect);
}
```

And here you go!

Resources

- Refer to GitHub - Horizontal Wheel View for more functions and a sample application.
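The wheel reports its position in radians, and getDegreesAngle() hands back a degree value for display. As a standalone sketch of what such a conversion involves, normalized to the 0-359 range a user would expect to see (class and method names here are illustrative, not from the Phimp.me source):

```java
// Sketch: convert a wheel angle in radians to the whole-degree value
// shown next to the wheel, normalized into [0, 360).
public class WheelAngle {
    static int toDisplayDegrees(double radians) {
        double deg = Math.toDegrees(radians) % 360.0;
        if (deg < 0) {
            deg += 360.0; // map negative (counter-clockwise) angles into range
        }
        return (int) Math.round(deg);
    }
}
```

Normalizing before display keeps the TextView from showing values like -90° when the user spins the wheel backwards.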


Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop. Any Electron app essentially comprises the following components:

- Main process (managing windows and other interactions with the operating system)
- Renderer process (managing the view inside the BrowserWindow)

Steps to set up the development environment

Clone the repo locally:

```shell
git clone https://github.com/fossasia/susi_desktop.git
cd susi_desktop
```

Install the dependencies listed in the package.json file:

```shell
npm install
```

Start the app using the start script:

```shell
npm start
```

Structure of the project

The project was restructured to ensure that the working environments of the main and renderer processes are separate, which makes the codebase easier to read and debug. This is how the project is currently structured: the root directory contains another directory, 'app', which holds our Electron application. Then we have a package.json, which contains information about the project and the modules required for building it, and then there are other GitHub helper files. Inside the app directory:

- Main - files for managing the main process of the app
- Renderer - files for managing the renderer process of the app
- Resources - icons for the app and the tray/media files

Webview tag

The webview tag displays external web content in an isolated frame and process. It is used to load chat.susi.ai in a BrowserWindow as:

```html
<webview src="https://chat.susi.ai/"></webview>
```

Adding event listeners to the app

Various Electron APIs were used to give a native feel to the application.

Send focus to the window's WebContents when the app window is focussed:

```javascript
win.on('focus', () => {
    win.webContents.send('focus');
});
```

Display the window only once the DOM has completely loaded:

```javascript
const page = mainWindow.webContents;
...
page.on('dom-ready', () => {
    mainWindow.show();
});
```

Display the window on the 'ready-to-show' event:

```javascript
win.once('ready-to-show', () => {
    win.show();
});
```

Resources

1. A quick article to understand Electron's main and renderer processes by Cameron Nokes at Medium: link
2. Official documentation about the webview tag: https://electron.atom.io/docs/api/webview-tag/
3. Read more about Electron processes: https://electronjs.org/docs/glossary#process
4. SUSI Desktop repository: https://github.com/fossasia/susi_desktop
