Toggling Voice On/Off in SUSI Chromebot

SUSI Chromebot has a lot of features that make it one of the best projects of FOSSASIA.

Recently, voice/speech output was added to SUSI Chromebot, but there was no option to control whether speech output should be used or not.

The latest addition to SUSI Chromebot is the ability to toggle SUSI's voice on or off.

How was it achieved?

Toggling voice for SUSI required adding a button to the interface and a snippet of JavaScript code to the main JS file. The code keeps track of whether voice output is currently on or off.

I started off by adding a button to the main HTML file.

<a href="javascript: void(0)" id="speak" style="color: white"><i class="material-icons" id="speak-icon">volume_up</i></a>

The above snippet of HTML code adds a voice button to the top bar of chromebot.

Next came the major part: adding the JavaScript code that gives the button its functionality.

var shouldSpeak = true;

I started off by creating a variable called "shouldSpeak" which determines whether or not SUSI should use Chrome's speech synthesis API to speak.

Then I changed the speakOut() function and added another parameter to it.

function speakOut(msg, speak = false) {
    if (speak) {
        var voiceMsg = new SpeechSynthesisUtterance(msg);
        window.speechSynthesis.speak(voiceMsg);
    }
}

The above code makes sure that SUSI speaks only when the "speak" parameter is set to true.

Then event listeners were added to the button and other elements to link the functionality.

document.getElementById('speak').addEventListener('click', changeSpeak);

This adds a click event to the "speak" button and associates it with the function changeSpeak().

The function changeSpeak() is defined as follows. It toggles voice on or off in SUSI Chromebot.

function changeSpeak() {
    shouldSpeak = !shouldSpeak;
    var SpeakIcon = document.getElementById('speak-icon');
    if (!shouldSpeak) {
        SpeakIcon.innerText = "volume_off";
    } else {
        SpeakIcon.innerText = "volume_up";
    }
    console.log('Should be speaking? ' + shouldSpeak);
}

Every time the user clicks the icon to toggle voice on or off, the icon itself must also change, and this is handled by the above piece of code.
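To show how the two pieces fit together, here is a minimal sketch (not the actual Chromebot code) of a reply handler that forwards the current value of shouldSpeak to speakOut(); the displayReply() function and the 'chat-box' element id are hypothetical:

// Hypothetical reply handler: render SUSI's answer and forward the current
// value of shouldSpeak so speech happens only while voice output is on.
function displayReply(msg) {
    var chatBox = document.getElementById('chat-box'); // hypothetical container id
    if (chatBox) {
        chatBox.appendChild(document.createTextNode(msg));
    }
    speakOut(msg, shouldSpeak);
}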


Resolving Internal Error on Badgeyay

Badgeyay is in its development stage and frequently encounters bugs. One such bug was an Internal Server Error in Badgeyay.

What was the bug?

The bug was in the badge generator's backend code. The generator was trying to serve a zip file that was not present. After going through the log, I noticed that this was because a folder was missing from Badgeyay's directory.

 

I immediately filed issue #58, which described the bug and how it could be resolved. After being assigned to the issue, I worked on a fix and created a Pull Request that was merged soon after.

The Pull Request can be found here.

Resolving the bug

With the help of extensive error handling and careful code and log analysis, I was able to figure out a fix for this bug. It was in fact due to a missing folder that was deleted by code running later in the zipfile/PDF generation, and which was supposed to be recreated every time it was deleted. I quickly wrote a function that prevents this error in future usage of Badgeyay.

 

How was it resolved?

First, I check whether the BADGES_FOLDER is present. If it is not, the folder is created using the code below:

 

if not os.path.exists(BADGES_FOLDER):
    os.mkdir(BADGES_FOLDER)

 

Then I added a docstring to the remaining part of the code, which empties all the files and folders inside BADGES_FOLDER. There are two kinds of things we may have to delete: a folder or a file.

So separate instructions are added to handle file deletion and folder deletion.

 

for file in os.listdir(BADGES_FOLDER):
    file_path = os.path.join(BADGES_FOLDER, file)
    try:
        if os.path.isfile(file_path):
            os.unlink(file_path)
        elif os.path.isdir(file_path):
            shutil.rmtree(file_path)
    except Exception:
        traceback.print_exc()

 

Here os.unlink() is a function that deletes a single file, and shutil.rmtree() deletes a whole folder at once, similar to "rm -rf directory" on the shell. Proper error handling is done as well to ensure the stability of the program.

Challenges

There were several problems that I had to face while fixing this bug.

  • It was my first time solving a bug, so I was nervous.
  • I had no knowledge of the shutil library.
  • I was a newcomer.

But I took these problems as challenges and was able to fix the bug that caused the Internal Server Error (500).


Comparison between SUSI AI with Mycroft AI and Amazon Alexa

Now is the era of Voice User Interface (VUI) devices, and they play a very important role as personal assistants. Here we compare SUSI AI, Mycroft AI and Amazon Alexa based on the number of skills, their availability, how easy it is to add and edit skills, and whether users can modify skills and extend them if needed.

Issue: https://github.com/fossasia/labs.fossasia.org/issues/215

The Comparison:

  1. Starting with the number of skills, Amazon Alexa supports far more skills than both Mycroft AI and SUSI AI.
  2. Availability: Mycroft AI and SUSI AI are available everywhere and can be set up anywhere regardless of the country, whereas Alexa is available in the U.S., U.K., Germany and India, although Amazon is aggressively expanding.
  3. Adding and editing skills: Mycroft and SUSI are open source, so their skills can be added, edited and viewed by the open source community, and issues can be filed to enhance the functionality of the skills. Alexa skills are not open source, and certification and publishing of a skill are done by the Amazon team. Mycroft and SUSI skills can be customized by the user, but this fails with Alexa, as users have to create the same skill from scratch if they want to customize it.
  4. Platforms supported: Mycroft, SUSI and Alexa all support Linux. Mycroft lacks support for Windows and Mac but supports Raspberry Pi and Android; Alexa provides support for Windows, Mac and Raspberry Pi. SUSI also provides support for Android and iOS and can be integrated with speakers, vehicles, the Raspberry Pi, etc.
  5. Dedicated devices: As of now, SUSI AI lacks such a device. Mycroft has the Mark 1 and Alexa has the Echo. These devices are portable and are good candidates for home automation.
  6. Languages used for skill development: Mycroft mostly uses Python. Alexa uses Python, Node.js, C#, etc. for application development. SUSI uses its own skill language, but languages like JavaScript can be embedded in it. It is easier to specify patterns using wildcards and variables in SUSI.

Due to different languages used, Mycroft AI skills can’t be directly used in SUSI AI. We need to convert Mycroft skills to SUSI skills if Mycroft skills are to be used for SUSI.

Some suggestions for making a dedicated device for SUSI:

  1. We can use a Raspberry Pi, USB headphones and a microphone to make a basic platform.
  2. We can install Jasper to enable voice input on the Pi. Jasper is an open source application that enables us to make voice-controlled applications.
  3. We can use the SUSI server to interact with the device and with home appliances like lights. The SUSI server can process the state of the appliance (lights in this case) and return it as JSON objects to the Raspberry Pi, which may then change the state as per user input (see the sketch after this list).
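To make the last suggestion more concrete, here is a minimal Node.js sketch of how a Raspberry Pi could query the SUSI server and act on the JSON it returns. The https://api.susi.ai/susi/chat.json endpoint, the answers[0].actions[0].expression field and the toggleLight() helper are assumptions for illustration, not an existing FOSSASIA implementation:

// Minimal sketch: ask SUSI a question from the Pi and toggle a light
// depending on the reply text (endpoint and JSON layout assumed above).
const https = require('https');

const query = encodeURIComponent('turn on the lights');

https.get('https://api.susi.ai/susi/chat.json?q=' + query, (res) => {
    let body = '';
    res.on('data', (chunk) => { body += chunk; });
    res.on('end', () => {
        const reply = JSON.parse(body);
        const text = reply.answers[0].actions[0].expression; // assumed field path
        console.log('SUSI says:', text);
        toggleLight(text.toLowerCase().includes('on'));
    });
});

// Placeholder for real GPIO code on the Raspberry Pi.
function toggleLight(state) {
    console.log('Setting light state to', state);
}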

Make a simple Hello World skill with SUSI:

  1. Visit https://github.com/fossasia/susi_skill_cms/blob/master/docs/Skill_Tutorial.md for a basic introduction to the SUSI skill syntax and how it works.
  2. Go to http://dream.susi.ai .
  3. Enter the skill name, say “hello”.
  4. You will be greeted by a welcome message ("roses are red…"). Delete it and replace it with the following snippet; a filled-in example follows the template.
::name <Skill_name> #<-- Enter skill name, for example hello
::author <author_name>
::author_url <author_url> #<-- You can leave this empty as of now
::description <description> #<-- Skill description
::dynamic_content No
::developer_privacy_policy <link> #<-- You can leave this empty as of now
::image <image_url> #<-- You can leave this empty as of now
::term_of_use <link>

#Intent. Comments are written with a #

hi|hello|what's up #<-- This is what the user says

Hi|I am good|Hello #<-- This is what the skill answers
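For reference, here is a hypothetical filled-in version of the template above for the hello skill; the author name and description are placeholder values chosen for illustration:

::name hello
::author susi_learner
::author_url https://github.com/susi_learner
::description A simple skill that greets the user
::dynamic_content No
::developer_privacy_policy
::image
::term_of_use

#Intent. Comments are written with a #
hi|hello|what's up
Hi|I am good|Hello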

5. Now go to http://susi.ai/chat for testing.

6. In the SUSI chat dialog box (present at the bottom of the page), enter dream <test application name>, where "test application name" is the name you entered when you first visited http://dream.susi.ai. In this case, enter "dream hello".

7. You can now input "what's up" in the dialog box and it will give you the output you specified in the skill.

Conclusion:

SUSI has its own strong points, but it lacks in some departments, like the number and type of skills. As with Mycroft, we can start making various skills and try to build a basic prototype of a dedicated SUSI personal assistant device.

Resources

  1. Jasper
  2. Skill addition to SUSI
  3. Mycroft hello world skill

 


Announcing the FOSSASIA Codeheat Winners 2017/18

Today we are very proud to announce our Grand Prize Winners and Finalist Winners of Codeheat 2017/2018.

Codeheat was epic in every regard. Participants not only solved a massive number of issues in FOSSASIA's projects, reviewed pull requests, shared scrums, and wrote blog posts, but most importantly they encouraged and helped one another to learn and progress along the way. It was a very, very busy 5 months for everyone – we had 647 developers from 13 countries participating in the contest supported by 43 mentors. Thank you all for this amazing achievement!

With so much excellent work getting done, it was super hard to choose the Grand Prize and Finalist Winners of the contest. Our winners stand out in particular as they contributed to FOSSASIA projects on a continuously high level following our Free and Open Source Best Practices. They worked in different areas – code, reviews, blog posts and supported other community members.

Each of the Grand Prize Winners is awarded a travel grant to join us at the FOSSASIA Summit in Singapore from March 22-25, 2018 where they receive the official Codeheat award, and meet with mentors and FOSSASIA developers. Other Finalist Winners will receive travel support vouchers to go to a Free and Open Source Software event of their choice. Active participants will also receive a certificate over the upcoming weeks. FOSSASIA mentors will meet many contributors and hand out prizes and T-shirts at our regular meetups and events across Asia.

Congratulations to our Grand Prize Winners, Finalist Winners, and all of the participants who spent the last few months learning, sharing and contributing to Free and Open Source Projects. Well done! We are deeply impressed by your work, your progress and advancement. The winners are (in alphabetical order):

Grand Prize Winners

Manish Devgan

Parth Shandilya

Raghav Jajodia

Finalist Winners

Anshuman Verma

Ayush Gupta

Bhavesh Anand

Mohit Sharma

Nikit Bhandari

Ritika Motwani

Vaibhav Singh

About Codeheat

Codeheat is a contest that the FOSSASIA organization is honored to run every year. We saw immense growth this year in participants and the depth of contributions.

Thank you Mentors and Supporters

Our 40+ mentors and many project developers, the heart and soul of Codeheat, are the reason the contest thrives. Mentors volunteer their time to help participants become open source contributors. Mentors spent hundreds of hours answering questions, reviewing submitted code, and welcoming new developers to the projects. Codeheat would not be possible without their patience and tireless efforts. Learn more about this year's mentors on the Codeheat website.

Certificate of Participation

Participating developers, mentors and the FOSSASIA admin team learnt so much; it was an amazing and enriching experience, and we believe the learnings are the main take-away of the program. We hope to see everyone continuing their contributions, sharing what they have learnt with others and seizing the opportunity to develop their code profile with FOSSASIA. We want to work together with the Open Tech community to improve people's lives and create a better world for all. As a participating developer or mentor, you will receive your certificate over the upcoming weeks. Thank you!


Installing Susper Search Engine and Deploying it to Heroku

Susper is a decentralized search engine that uses the peer-to-peer system YaCy and Apache Solr to crawl and index search results.

Search results are displayed using the Solr server that is embedded into YaCy. All search results must be provided by a YaCy search server, which includes a Solr server with a specialized JSON result writer. When a search request is made in one of the search templates, an HTTP request is made to YaCy. The response is JSON, because JSON can be parsed much more easily than XML in JavaScript.
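As an illustration of that request/response cycle, here is a minimal sketch (not Susper's actual code) of querying a YaCy peer's JSON search API from JavaScript. The localhost:8090 port, the yacysearch.json endpoint and the channels[0].items layout are assumptions about a default YaCy setup and may differ on yours:

// Minimal sketch: ask a local YaCy peer for results and log the title and
// link of each item (endpoint and field names assumed, see above).
fetch('http://localhost:8090/yacysearch.json?query=fossasia')
    .then((response) => response.json())
    .then((data) => {
        const items = data.channels[0].items;
        items.forEach((item) => console.log(item.title, '->', item.link));
    })
    .catch((error) => console.error('YaCy request failed:', error));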

In this blog, we will talk about how to install Susper search engine locally and deploying it to Heroku (A cloud application platform).

How to clone the repository

Sign up / Login to GitHub and head over to the Susper repository. Then follow these steps.

  1. Go ahead and fork the repository:
https://github.com/fossasia/susper.com

  2. Clone the forked version on your local machine using:
git clone https://github.com/<username>/susper.com.git

  3. Add the upstream remote to synchronize your repository using:
git remote add upstream https://github.com/fossasia/susper.com.git

Getting Started

Setting up the Susper search application locally basically consists of the following steps:

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/cli@latest

  2. After installing angular-cli, we need to install the required node modules using:
npm install

  3. Serve the app locally by running:
ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Susper Search Engine to Heroku :

  1. We need to install Heroku on our machine. Type the following in your Linux terminal:
wget -O- https://toolbelt.heroku.com/install-ubuntu.sh | sh

This installs the Heroku Toolbelt on your machine so that you can access Heroku from the command line.

  2. Create a Procfile inside the root directory and write:
web: ng serve

  3. Next, we need to log in to our Heroku account (assuming that you have already created one). Type the following in the terminal:
heroku login

Enter your credentials and log in.

  4. Once logged in, we need to create a space on the Heroku server for our application. This is done with the following command:
heroku create

  5. Add the Node.js buildpack to the app:
heroku buildpacks:add --index 1 heroku/nodejs

  6. Then we deploy the code to Heroku:
git push heroku master
git push heroku yourbranch:master # If you are in a branch other than master


Installing the Loklak Search and Deploying it to Surge

The Loklak search application creates a website using the Loklak server as a data source. The goal is to get a search site that offers timeline search as well as custom media, account and geolocation search.

In order to run the service, you can use the API of http://api.loklak.org or install your own Loklak server as the data storage engine. Loklak_server is a server application which collects messages from various social media sources, including Twitter. The server contains a search index and a peer-to-peer index sharing interface. All messages are stored in an Elasticsearch index.
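To get a quick feel for the data source, here is a minimal sketch (not part of loklak_search itself) that queries the public Loklak API from JavaScript. The /api/search.json endpoint and the statuses field are assumptions about the API layout:

// Minimal sketch: fetch messages about "fossasia" from the public Loklak API
// and log the text of each message (field names assumed, see above).
fetch('https://api.loklak.org/api/search.json?q=fossasia')
    .then((response) => response.json())
    .then((data) => {
        (data.statuses || []).forEach((message) => console.log(message.text));
    })
    .catch((error) => console.error('Loklak request failed:', error));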

The site of this repo is deployed from the GitHub gh-pages branch and is automatically deployed here: http://loklak.org

In this blog, we will talk about how to install Loklak_Search locally and deploying it to Surge (Static web publishing for Front-End Developers).

How to clone the repository

Sign up / Login to GitHub and head over to the Loklak_Search repository. Then follow these steps.

  1. Go ahead and fork the repository:
https://github.com/fossasia/loklak_search

  2. Clone the forked version on your local machine using:
git clone https://github.com/<username>/loklak_search.git

  3. Add the upstream remote to synchronize your repository using:
git remote add upstream https://github.com/fossasia/loklak_search.git

Getting Started

Setting up the Loklak search application locally basically consists of the following steps:

  1. First, we will need to install angular-cli by using the following command:
npm install -g @angular/cli@latest

  2. After installing angular-cli, we need to install the required node modules using:
npm install

  3. Serve the app locally by running:
ng serve

Go to localhost:4200 where the application will be running locally.

How to Deploy Loklak Search on Surge :

Surge is a service that publishes static web pages and generates a demo link, which makes it easier for developers to deploy their web apps. There are a lot of benefits to using Surge over generating a demo link with GitHub Pages.

  1. We need to install Surge on our machine. Type the following in your Linux terminal:
npm install --global surge

This installs Surge on your machine so that you can access it from the command line.

  2. In your project directory, just run:
surge

  3. After this, it will ask you for three parameters, namely:
Email
Password
Domain

After specifying these three parameters, the deployment link with the respective domain is generated.

Auto deployment of Pull Requests using Surge :

To implement auto-deployment of pull requests using Surge, one can follow these steps:

  • Create a pr_deploy.sh file.
  • The pr_deploy.sh file will be executed only after Travis CI succeeds, i.e. when the Travis build passes, using the command bash pr_deploy.sh.
#!/usr/bin/env bash
if [ "$TRAVIS_PULL_REQUEST" == "false" ]; then
    echo "Not a PR. Skipping surge deployment."
    exit 0
fi
npm i -g surge
export SURGE_LOGIN=test@example.co.in
# Token of a dummy account
export SURGE_TOKEN=d1c28a7a75967cc2b4c852cca0d12206
export DEPLOY_DOMAIN=https://pr-${TRAVIS_PULL_REQUEST}-fossasia-LoklakSearch.surge.sh
surge --project ./dist --domain $DEPLOY_DOMAIN;

Here, Travis CI first installs Surge with npm i -g surge, and then we export the environment variables SURGE_LOGIN, SURGE_TOKEN and DEPLOY_DOMAIN.

Now, execute the pr_deploy.sh file from .travis.yml by using the command bash pr_deploy.sh.


How to organise a successful Google Code-In meetup

In this blog post I hope to write about what Google Code-In is and the best way to organise a successful Google Code-In meetup or workshop in your local community. I hope you will find here everything that you need to know about conducting a successful meetup.

What is Google Code-In ?

Google Code-In is a global open source contest funded by Google to give real-world software development experience to pre-university students in the 13-17 age range. Besides software development itself, the contest's main objective is to motivate tech-enthusiastic students to contribute to open source and give them knowledge about open source software development.

The usual timeline of the contest is that it opens for students at the end of November and runs until mid-January. There are 25 open source organizations participating in Google Code-In this time.

Your role ?

As a GCI mentor, past GCI student or open source contributor, you have a responsibility towards the community: to expand community awareness and transfer your knowledge to the next generation. You gather experience while working on open source projects, and GCI is the best place to pass your knowledge on to youngsters while working with them. You should be devoted to guiding students and giving them an introduction to open source software development.

How students can be a part of the contest ?

Any pre-university student in the 13-17 age group can register for the contest. The following four steps need to be followed by the student to be eligible to compete in the contest.

  1. Sign up at g.co/gci after reading the Contest Rules.
  2. Ask their parent or legal guardian to sign the Parental Consent form.
  3. Find a task that interests them.
  4. Claim the task and start working while getting guidance from the mentors.

In return for their hard work and open source contributions, students can win digital certificates, t-shirts and hoodies based on their performance, as well as a trip to Google headquarters for the Grand Prize Winners.

How to organize a local meetup ?

Since the Google Code-In contest is for pre-university students, I highly recommend that you organize a meetup for schools in the community. You can contact the club or society of the school which is related to Information and Communication Technology and convey your idea of the meetup, so that the responsible person can get management approval from their side to host your meetup inside the school.

If you are not confident enough to conduct a session on your own, maybe because this is a new experience for you, don't worry! You can always ask other past GCI students, GCI mentors or open source contributors to collaborate with you in conducting a successful session. As the open source world teaches us, it is always collaboration that brings success to any project.

To start off the meetup, you need to give an introduction to Google Code-In. You may get various questions from the audience along the lines of "What is GCI?". It is better if you can emphasize the importance of contributing to open source projects, since the students have no experience in that field. I suggest you give students an insight into how Google Code-In has evolved over the past years, so they get to know the real-world statistics.

During the meetup, you need to focus on the five types of tasks that are available for students to claim, giving insights into the small things that they really need to know for each task type.

  1. Coding
    • Give insights into GitHub and how to make a GitHub account, how to clone a project repository to their local machines and how to make a pull request.
  2. Documents and Training
    • Give insights into standard ways of doing documentation and basics to follow when conducting a user training.
  3. Outreach and Research
    • Give insights into how to make a blog account and write a blog as well as how to do some research on the project areas.
  4. Quality Assurance
    • Give insights into the measures we take to assure the quality of the project and the steps we take to make sure the project adheres to the relevant quality standards.
  5. User Interface
    • Give insights into basic wireframing software like Balsamiq, as well as guidelines for a successful user experience.

It is really appreciated if you can share your own open source experiences with them: what you did, what you will be doing next, what obstacles you had to face while contributing and how you overcame those challenges. This will be an eye-opener for them to think beyond their comfort zone, and it will really help the students grasp what contributing to open source is about.

It is a best practice to conduct the session in an interactive way so that the students don't get bored and feel more energetic and comfortable; they feel their opinion is valued when we give time for their voice as well. Always encourage them to ask questions whenever they need more clarification about what you are saying. And if you have swag from Google, hand it out too, since they will love it.

Always try to localize the session according to the audience that you are talking to. Use the language the majority of the audience is comfortable with in order to make the meetup content more understandable to the community. You can use slides so that you won't miss the sections that you are going to talk about and the presentation flow stays smooth. Take an offline copy of the slide set with you on a USB drive if you are making your presentation on Google Slides, and do the same for any videos that you are going to show.

Don't forget to bring the necessary cables/converters (projector adapters) with you, and always make sure you have a good internet connection if you are using the internet for demos or anything else, to avoid connectivity issues that interrupt the meetup and leave a bad impression on the students.

So far I have written about how to organize a successful Google Code-In meetup in your local community, and I hope this information, gathered through my own experience of conducting local meetups, will be very useful for you. I'm waiting to see some new meetups coming soon from all of you. Good luck!

 


The Road to Success in Google Summer of Code 2017

It's the best time for GCI students to get an overview of the GSoC experience, and for all aspiring participants to get involved in the different projects of FOSSASIA.

I'm a junior-year undergraduate student pursuing a B.Tech in Electrical Engineering at the Indian Institute of Technology Patna. This summer, I spent my time coding in Google Summer of Code with the FOSSASIA organization. It feels great to be an open-source enthusiast, and having Google as a sponsor is the icing on the cake. People can learn new things here and meet new people.

I came to know about GSoC through my seniors who were selected for GSoC in 2016. It was around September 2016 and I was in the second year of college; the results of that year's GSoC had just been declared.

What is GSoC?

Consider GSoC as a big bowl full of small balls, where those small balls are open-source organizations. Google basically acts as a sponsor for the open-source organizations. A timeline is published for the participating organizations, and students select their favorite organization and start to contribute to it. Believe me, it is not restricted to the computer science branch: anyone can take part in it and there is no minimum CPI requirement. I consider myself an example, coming from the electrical branch with not-so-good academic performance, yet successfully being part of GSoC 2017.

How to select an organization?

This is the most important step and it takes time. I wandered through around 100 organizations to find where my interest actually lay. Here is how to narrow this down and find your organization a little quicker. Take a pen and paper (kindly don't use the notepad on your PC) and write down your fields of interest in computer science. Number every point in decreasing order of your interest. Then, for each field, write down its basic prerequisites. Visit the GSoC website, go to the organizations tab, and use the search to filter organizations by field of work. Select one organization, dig through its website, and look at its previous projects and its application. If nothing fits you, repeat the same with another organization. If an organization interests you, then look for a project of that organization. First of all, look at the application of the project, give that application a try, and make sure to give feedback to the organization. Then try to find out what languages, modules, etc. the project uses and how the project works. Don't worry if nothing goes into your mind at first. Find the developers' mailing list, their chat channel and their code base, and ask the developers there for help.

First Love It:

Open source is a different world that exists on Earth. All the organizations are open source and all their code is open and free to view. Find the things that interest you the most and start to love the work. If you don't understand some code, learn by doing and by asking. Most of the time we don't get favorable responses; in such times we need to carry on and have patience for the best to happen.

My Favourite part:

GSoC has been my dream since the day I came to know about it. It is through this program that one gets a chance to explore open-source software, and organizations get a chance to bring new developers on board. It is a great initiative taken by Google which gives developers hope of increasing the use of open source. It is one of the ways through which one can look into other developers' code, help them out and also get helped.

GSoC is a platform through which one can implement lots of new things, meet new people, develop new software and see the world around in a different way. That's what happened with me: just by the end of the first phase, my love for open source had increased exponentially. Now I see every problem in my life as something to solve through open source. Whether it is arranging an event or designing an invitation, I am encouraged to use open-source tools to help me out. It becomes very easy to distribute data and convey information through open source, so people can reach you much more easily.

You always see a thing from your own perspective and it always seems the best, but open source lets it be viewed through the perspective of the world and draws out the best through a compilation of all the sources. One can give ideas and views, find something that others can't even see, and increase one's karma through contribution. All these things have been made possible through Google. I have reached the point where I could devote the rest of my life to working on open source. GSoC is responsible for making open-source contribution part of my daily life. It makes me feel really bad if my GitHub profile page has zero contributions at the end of the day. Open source opens the door to another world.

Challenging part:

To conclude, I would say that GSoC made me love a challenge. I became such that things that come easily to me don't taste good at all. GSoC's most challenging part is getting into it, that is, getting selected. I still can't believe that I was selected; from then onwards it was just fun and learning. Each and every day I encountered several issues, bugs, etc., but just before going to bed at night there were things which collectively made me feel that, whether the bug had been solved or not, I had been able to break through the outermost layer of that conch shell. Such things increase the motivation and light up the enthusiasm to tackle the problem. Open source taught me to manage not only different snapshots of software but also of time. I learned to manage the different tasks of the day efficiently, with open-source contribution as part of my daily life.

Advice to students:

The only problem new developers have is getting started. I would advise them to close their eyes and dive into it without thinking about whether they will be able to complete the task or not. Believe me, you will gradually find that, whether the task gets completed or not, you end up in a much better place than you were when you began it.

Just learn by doing the things.

Make mistakes and list them as "things that will not work" so others may read and avoid them.

GSoC Project link: https://summerofcode.withgoogle.com/projects/#5560333780385792
Final Code Submission: https://gist.github.com/meets2tarun/270f151d539298831ce542be5f733c82


Enhancing Rotation in Phimp.me using Horizontal Wheel View

Installation

To implement rotation of an image in Phimp.me, we have used the Horizontal Wheel View library. It is a custom view for user input that models a horizontal wheel controller. How did we include this feature using jitpack.io?

Step 1: 

The jitpack.io repository has to be added to the root build.gradle:

allprojects {
repositories {
jcenter()
maven { url "https://jitpack.io" }
}
}


Then, add the dependency to your module build.gradle:

compile 'com.github.shchurov:horizontalwheelview:0.9.5'


Sync the Gradle files to complete the installation.

Step 2: Setting Up the Layout

Horizontal Wheel View has to be added to the XML layout file as shown below:

<FrameLayout
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_weight="2">

<com.github.shchurov.horizontalwheelview.HorizontalWheelView
android:id="@+id/horizontalWheelView"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_toStartOf="@+id/rotate_apply"
android:padding="5dp"
app:activeColor="@color/accent_green"
app:normalColor="@color/black" />

</FrameLayout>


It has to be wrapped inside a Frame Layout to give weight to the view.
To display the angle by which the image has been rotated, a simple text view has to be added just above it.

<TextView
android:id="@+id/tvAngle"
android:layout_width="match_parent"
android:layout_height="0dp"
android:layout_gravity="center"
android:layout_weight="1"
android:gravity="center"
android:textColor="@color/black"
android:textSize="14sp" />

Step 3: Updating the UI

First, declare and initialise objects of HorizontalWheelView and TextView.

HorizontalWheelView horizontalWheelView = (HorizontalWheelView) findViewById(R.id.horizontalWheelView);
TextView tvAngle= (TextView) findViewById(R.id.tvAngle);

 

Second, set up a listener on the HorizontalWheelView and update the UI accordingly.

horizontalWheelView.setListener(new HorizontalWheelView.Listener() {
@Override
public void onRotationChanged(double radians) {
updateText();
updateImage();
}
});


updateText() updates the displayed angle and updateImage() updates the image to be rotated. These functions are defined below:

private void updateText() {
String text = String.format(Locale.US, "%.0f°", horizontalWheelView.getDegreesAngle());
tvAngle.setText(text);
}

private void updateImage() {
int angle = (int) horizontalWheelView.getDegreesAngle();
//Code to rotate the image using the variable 'angle'
rotatePanel.rotateImage(angle);
}


rotateImage() is a method of rotatePanel, which is an object of RotateImageView, a custom view used to rotate the image.

Let us have a look at some part of the code inside RotateImageView.

private int rotateAngle;


rotateAngle is a member variable that holds the angle by which the image has to be rotated.

public void rotateImage(int angle) {
rotateAngle = angle;
this.invalidate();
}


The method invalidate() is used to trigger a UI refresh, and every time the UI is refreshed, the draw() method is called.
We have to override the draw() method and write the main image-rotation code in it.

The draw() method is defined below:

@Override
public void draw(Canvas canvas) {
super.draw(canvas);
if (bitmap == null)
return;
maxRect.set(0, 0, getWidth(), getHeight());// The maximum bounding rectangle

calculateWrapBox();
scale = 1;
if (wrapRect.width() > getWidth()) {
scale = getWidth() / wrapRect.width();
}

canvas.save();
canvas.scale(scale, scale, canvas.getWidth() >> 1,
canvas.getHeight() >> 1);
canvas.drawRect(wrapRect, bottomPaint);
canvas.rotate(rotateAngle, canvas.getWidth() >> 1,
canvas.getHeight() >> 1);
canvas.drawBitmap(bitmap, srcRect, dstRect, null);
canvas.restore();
}

private void calculateWrapBox() {
wrapRect.set(dstRect);
matrix.reset(); // Reset the matrix to the identity matrix
int centerX = getWidth() >> 1;
int centerY = getHeight() >> 1;
matrix.postRotate(rotateAngle, centerX, centerY); // Apply the rotation angle about the view centre
matrix.mapRect(wrapRect);
}

 

And here you go:

Resources

Refer to the Horizontal Wheel View repository on GitHub for more functions and a sample application.


Setting up SUSI Desktop Locally for Development and Using Webview Tag and Adding Event Listeners

SUSI Desktop is a cross-platform desktop application based on Electron which presently uses chat.susi.ai as a submodule and allows users to interact with SUSI right from their desktop.

Any Electron app essentially comprises the following components (a minimal main-process sketch follows this list):

    • Main Process (Managing windows and other interactions with the operating system)
    • Renderer Process (Manage the view inside the BrowserWindow)
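As a rough illustration of the main process's role, here is a minimal Electron main-process sketch. It is not the actual susi_desktop code; the window size, the webviewTag preference and the renderer/index.html path are assumptions for illustration:

// Minimal sketch of an Electron main process: create a BrowserWindow and load
// the renderer page, which embeds chat.susi.ai through a <webview> tag.
const { app, BrowserWindow } = require('electron');
const path = require('path');

let win;

app.on('ready', () => {
    win = new BrowserWindow({
        width: 1024,
        height: 768,
        webPreferences: { webviewTag: true } // allow <webview> in the renderer
    });
    // Hypothetical path to the renderer's HTML page.
    win.loadURL('file://' + path.join(__dirname, 'renderer', 'index.html'));
});

app.on('window-all-closed', () => {
    app.quit();
});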

Steps to set up the development environment

      • Clone the repo locally.
$ git clone https://github.com/fossasia/susi_desktop.git
$ cd susi_desktop
      • Install the dependencies listed in package.json file.
$ npm install
      • Start the app using the start script.
$ npm start

Structure of the project

The project was restructured to ensure that the working environments of the Main and Renderer processes are separate, which makes the codebase easier to read and debug. This is how the current project is structured.

The root directory of the project contains a directory 'app' which holds our Electron application. Then we have a package.json which contains information about the project and the modules required for building it, and there are other GitHub helper files.

Inside the app directory-

  • Main – Files for managing the main process of the app
  • Renderer – Files for managing the renderer process of the app
  • Resources – Icons for the app and the tray/media files
Webview Tag

Display external web content in an isolated frame and process. This is used to load chat.susi.ai in a BrowserWindow as

<webview src="https://chat.susi.ai/"></webview>

Adding event listeners to the app

Various Electron APIs were used to give a native feel to the application.

  • Send focus to the window WebContents on focussing the app window.

win.on('focus', () => {
    win.webContents.send('focus');
});

  • Display the window only once the DOM has completely loaded.

const page = mainWindow.webContents;
...
page.on('dom-ready', () => {
    mainWindow.show();
});

  • Display the window on the 'ready-to-show' event.

win.once('ready-to-show', () => {
    win.show();
});
    

Resources

  1. A quick article to understand Electron's main and renderer process by Cameron Nokes at Medium link
  2. Official documentation about the webview tag at https://electron.atom.io/docs/api/webview-tag/
  3. Read more about Electron processes at https://electronjs.org/docs/glossary#process
  4. SUSI Desktop repository at https://github.com/fossasia/susi_desktop.
