Friday, December 28, 2018

Story Behind Mock Interview LK



I participated in my first ever interview in 2009, as a candidate for a software engineering internship. Before that, there were a couple of awareness programs arranged by the university to get students ready for interviews, but I was not able to pay much attention to them, so I faced the interview with a lack of preparation. During the interview, I realized that I had not taken the time to cover the basics. My self-introduction was horrible and didn't contain any of the important points that could have led to a better conversation. I was not prepared with information on the projects and technologies I had included in my CV, so my answers mostly started with "I'm not sure ..." or "as I can remember ...", which ruined my confidence. I had not even refreshed my knowledge of basics such as OOP concepts, databases or algorithms, which made me skip some of the questions asked. I got rejected from my first interview, and then the second, and then the next.

After getting rejected from 3 consecutive internship interviews I was desperate, and only a very few students from my batch were left in the same position. The worst part was that I wasn't able to correct myself after the first interview and kept repeating the same mistakes. In the end, the head of my engineering department had to get directly involved to find me an internship opportunity.

After the internship, I went back to the university to complete my fourth year, and at the end of it there was a career fair organized by the university. As with the last time, there was a series of programs to prepare students for their interviews, and this time I made the best of them. There was a new program called Mock Interviews, and I got very valuable constructive feedback during the first mock interview; I then requested another one to test myself further. With all that I was well prepared for the job interviews and was able to get a job offer sooner than many of my colleagues. I realized that "The Mock Interview" helped me a lot to prepare for my real interviews. In later years I contributed to conducting a number of mock interviews for students, and those students found it helpful as well; one of them personally reached out to appreciate the feedback I provided, after getting his first job offer at a renowned company in Sri Lanka.

During my 7 years of work life, I have interviewed a considerable number of candidates, and I see my story in many of them. Whenever I see such an unprepared candidate, I remember how I suffered in my internship interviews. At the same time, it strikes me how much of a difference some simple guidance could have made for them. Mock Interview LK is the result of that feeling.

With the help of a few of my friends who have years of experience in the software industry, we started a free service to provide mock interviews for candidates who are in need. Candidates can make a request by filling in the Google Form pinned on our FB page, or they can send us a quick request via Email, Messenger or Twitter. Each request will be served with an online mock interview at a time convenient for both parties. At the end of the mock interview, there will be a feedback session covering the CV as well as how the candidate performed during the mock interview.

We currently have resource people to provide mock interviews for the software industry, and based on the requests we'll try to get more resource people from other industries. Please spread the word to candidates who are in need so we can help them face their next interview more confidently.

Mock Interview LK is there to make everyone ready for their next interview. It's all free, so do not miss the opportunity.

Saturday, July 23, 2016

You don't know Javascript (Scope and Closure)

JavaScript is one of the three core technologies of the World Wide Web. Since its origin two decades ago, it has evolved a lot, and today it is one of the most widely used programming languages on earth.

Since 2009 I have used JavaScript for web development, and until recently I used it for basic DOM manipulation with jQuery, simple UI logic with vanilla JS, and with frameworks such as AngularJS. The basic knowledge I had about JS was sufficient to get those things done. But recently, when I started doing some serious development with React and GraphQL, I realized that my knowledge of JS was pretty primitive and that I wasn't really applying any core JS concepts or patterns in my code. Around the same time, while listening to a podcast on Software Engineering Daily, I found Kyle Simpson explaining his series of books called "You Don't Know JavaScript". That got me interested in reading the books to sharpen my saw in JS. It's a pretty interesting series and it covers many basic concepts in JS. The following series of articles will cover what I learned in each of these books.

Scope


Scope is one of the core concepts in JavaScript, and a thorough understanding of it is a must for writing better JS code. Let's dig further into scope.

What is scope?

Scope is a set of well-defined rules about how variables are stored and retrieved in a program. In one way or another, you are always dealing with these rules when writing and executing code in JS.


In JS, an assignment to a variable in scope can be performed in two ways.

1) by using the = assignment operator

a = 3;

2) or by passing an argument to a function parameter.

function foo(x) {
}

foo(3);

Lexical Scope

In JS, scope is determined by lexical scoping: the scope is defined at author (write) time, based on where functions and blocks are placed in the code.
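As a minimal sketch (with hypothetical function names), "a" inside inner is resolved from where inner is written, i.e. inside outer:

function outer() {
  var a = 1;
  function inner() {
    console.log(a); // resolved lexically, from the enclosing scope where inner is written
  }
  inner();
}

outer(); // will print 1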

However, there are two mechanisms in JS that can overwrite (cheat) lexical scope at runtime. Those are,

1) the eval() function
eval() takes a string as input and executes its content as if it were code written at that point, which can modify the existing lexical scope at runtime (a small sketch appears after the with() example below).

2) the with statement
Takes an object as input and treats that object as a new, separate lexical scope.


function foo(obj) {
  with(obj){
    console.log(x);
  }
}
foo({x:5}); // will print "5"
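And here is a minimal sketch of eval() modifying an existing lexical scope at runtime (assuming non-strict mode):

function foo(str, a) {
  eval(str); // cheating: the string declares a new variable "b" inside foo's scope
  console.log(a, b);
}

foo("var b = 3;", 1); // will print "1 3"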

Cheating lexical scope prevents the engine from optimizing its scope lookups, which makes the code slower, so it is better to avoid it.


Nested Scopes

In JS we can define different levels of scope. The outermost scope is the global scope, and new nested scopes are created when we define functions or blocks.

var a = 5;

function foo(x) {
  var b = 10;
}
In above example "a" is in global scope and "b" is in a new function scope.

When the JavaScript engine needs to resolve a variable used in an inner scope, it starts searching in that scope; if the variable isn't there, it checks the immediately enclosing scope, and it keeps traversing outwards like that until it finds the variable and returns it. Once it reaches the outermost (global) scope, it stops traversing.

There is a really tricky part in how the fallback happens when the engine cannot find a declared variable in any scope. If a variable is not found in any enclosing scope and it is the target of an assignment, a new variable is implicitly created in the global scope (in non-strict mode). You should be really careful about this.

var a = 5;

function foo(x) {
  b = 10;
}

In the above example, "b" is assigned without ever being declared with var in any scope, so when foo() is called the engine implicitly creates a new global variable "b".

Note: Global variables automatically become properties of the global object; in the browser, that is the window object. So a global variable "a" can also be accessed as "window.a". Using this technique you can reach a variable in the global scope even when it is shadowed. Ex:

var a = 5;

function foo() {
  var a = 10;
  console.log(a); // will print "10"
  console.log(window.a); // will print "5"
}

foo()

This will print two lines as follows.

10
5

There is another interesting point demonstrated in the above example. It's shadowing.

Shadowing

Shadowing is declaring a variable in an inner scope with the same name as a variable in an outer scope, so the inner one overrides the outer one within that scope. Shadowing is commonly used in JS because it allows developers to isolate variables within functions while reusing the same variable names.

Since scope is really powerful, as well as dangerous when misused, there are well-known best practices for avoiding surprises. All of these best practices target common objectives such as not polluting the global scope and isolating variables in function scope as much as possible. Following are some of those best practices.

Isolate in objects

It's a best practice to create a single object in the global namespace and define variables and functions as properties of that object. This way, we define only a limited number of names in the global scope, and all the other variables become properties of carrier objects.

var myLibrary = {
  foo: function() {
    console.log('this is foo');
  },
  x: 5
}

myLibrary.foo(); // will print "this is foo"


Module Management

Most modern JS libraries wrap their variables inside a module and expose them via dependency management. When using such a library in our code, we import its module into our own scope, which prevents polluting the global scope (a small sketch follows below).

This will be covered in more detail under the Closure section.
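As a minimal sketch, assuming ES module syntax and a hypothetical math.js file, only the imported name enters our scope and nothing else leaks into the global scope:

// math.js (hypothetical module)
export function sum(a, b) {
  return a + b;
}

// app.js
import { sum } from './math.js';

console.log(sum(2, 3)); // will print 5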

Functions as Scope

When we use a named function declaration, its name pollutes the enclosing (often global) namespace. That can be avoided with the following technique, by treating the function as an expression rather than a declaration.


(function foo() {
    console.log('How are you');
})(); // will print "How are you"


You can also pass a value into the function as follows.

(function foo(x) {
   console.log(x);
})("Hi"); // will print "Hi"


In the above code, the first pair of parentheses turns the function into an expression, and the second pair executes that expression immediately.

Anonymous vs Named functions

A function expression can be anonymous, but a function declaration cannot be anonymous. It is preferable to use named functions, which have fewer drawbacks than anonymous functions (for example, clearer stack traces and the ability to refer to themselves).
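As a small sketch, a named function expression keeps the benefits of an expression while still having a name (here the hypothetical name greet) for stack traces and self-reference:

var sayHi = function greet(name) {
  console.log('Hi ' + name); // "greet" is only visible inside the function itself
};

sayHi('Bob'); // will print "Hi Bob"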

The following are valid syntaxes because in them the function is an expression.

var x = function(x) {
   console.log(x);
}

x('wow'); // will print "wow"


(function(x) {
   console.log(x);
})('wow'); // will print "wow"

The following syntax is invalid because there we use an anonymous function declaration.

function(x) {
   console.log(x);
}


Blocks as Scope

Generally, if we define a variable with the var keyword inside a block, it belongs to the enclosing function (or global) scope rather than the block, and is visible throughout that whole scope.


if (true) {
  var n = "How are you";
}

console.log(n); // will print "How are you"


In above code variable "n" will belong to global scope. That happens becuase of variable hoisting in JS engine.

Variable Hoisting

Variable and function declarations are moved (hoisted) to the top of their enclosing scope during compilation. When there are duplicate function declarations, later ones override the previous ones (see the sketch after the examples below).



a = 2;
var a;
console.log(a); // will print 2, because the declaration "var a" is hoisted above the assignment

In a separate program, only the declaration is hoisted, not the assignment, so the log runs before the value is assigned:

console.log(a); // will print undefined
a = 2;
var a;
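A small sketch of the "later declaration wins" behaviour mentioned above, assuming both declarations live in the same scope:

foo(); // will print "second", because both declarations are hoisted and the later one wins

function foo() {
  console.log("first");
}

function foo() {
  console.log("second");
}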

But if you define a variable with the "let" keyword, it belongs only to the current block. Declarations made with "let" are not hoisted to the top of the enclosing function scope the way var declarations are.


if (true) {
  let m = "How are you";
}

console.log(m); // will give an error

You can create a constant using the const keyword, and its scope is also limited to the current block.


if (true) {
  const m = "How are you";
  m = "Changed"; // will give an error
}

Closure



What is Closure?

Closure is when a function is able to remember and use its lexical scope even when the function is executing outside of that lexical scope.

Closure is one of the least understood features in JS, yet it is also one of the most powerful.

function foo(a) {
  function bar() {
    console.log(a);
  }
  return bar;
}

var baz = foo(2);
baz(); // will print 2


In above example, "bar" still has the reference to the scope of a function, and that reference is called closure.

Closure is used heavily all over JS programs. One of its main applications is when developers create modules to isolate variables and functions and expose only some of them to the outside.

Modules

A module function returns an object holding references to its internal functions. Those functions later run outside the lexical scope they were defined in, which is a good example of closure.



function myModule(a, b) {
  function sum() {
    return a + b;
  }

  function subtract() {
    return a - b;
  }

  return {
    sum: sum,
    subtract: subtract
  };
}

var operations = myModule(2, 4);
console.log(operations.sum()); // will print 6

This is a great way to keep the global scope clean and expose only the necessary components to the outside, while maintaining internal state within the module (a small sketch of that internal state follows).
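As a minimal sketch (a hypothetical counterModule), the closure over count keeps the state private while increment() exposes controlled access to it:

function counterModule() {
  var count = 0; // internal state, hidden from the outside

  function increment() {
    count = count + 1;
    return count;
  }

  return {
    increment: increment
  };
}

var counter = counterModule();
console.log(counter.increment()); // will print 1
console.log(counter.increment()); // will print 2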


Now we have come to the end of this article about JavaScript scope and closure concepts. I hope you picked up some quick, interesting points about them, and if you want further reading, the "You Don't Know JavaScript" book series is highly recommended. I'll bring some more interesting points in my next article, about JavaScript objects and prototyping.

Wednesday, July 13, 2016

Rethink your API with GraphQL


REST is the underlying architectural style of the Web and of how clients and servers communicate with each other. Using REST, clients and servers can communicate without prior knowledge of each other. Because of the power of REST principles, over time software architects have adopted its core principles to implement APIs in applications. Following are some of the core principles of RESTful APIs.

  1. It works over the HTTP protocol, which is stateless.
  2. The architecture fully operates on the concept of individual resources, which are identified uniquely through URIs.
  3. Resources are manipulated using basic HTTP verbs such as GET, PUT, PATCH...
Since RESTful APIs are fully focused on individual resources, there are several consequences you'll face alongside all the advantages they bring.
  1. Since resource manipulation is done through HTTP verbs, performing complex manipulations that involve multiple different types of resources is not straightforward. There may be situations where clients need multiple roundtrips to perform such an operation.
  2. In order to manipulate multiple related resources, clients have to maintain state between operations through headers or some other means.
  3. Service discovery and client tooling are difficult due to the lack of a strong type system at the back end.
  4. Since the resource representation has to be generic at the API and all clients share the same representation, there is no control over the amount of data transferred to each client (over-fetching).
When it comes to mobile clients, disadvantages such as multiple roundtrips and over-fetching become really critical, because those clients are used in low-bandwidth environments.

When Facebook started building their mobile apps in 2012, they initially decided to use a RESTful API to communicate with their backend services. With their rapid development cycles, they found that many versions of their mobile app were out in use at once (a new one every two weeks), all calling the same backend services built on the REST architecture. This led to serious issues in managing and maintaining their API to support different versions of the client apps. A group of engineers at Facebook set out to find a way to overcome these problems, and GraphQL came out as a result of that effort.



What is GraphQL?

GraphQL is a data query language for client applications that defines a flexible way to request and provide data. It allows developers to describe their data requirements in the same shape in which they think about the data.
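As a minimal sketch (assuming a hypothetical /graphql endpoint and a schema that exposes a user field), the client sends a query shaped exactly like the data it wants and receives only those fields back:

var query = `
  query {
    user(id: "1") {
      name
      friends {
        name
      }
    }
  }
`;

fetch('/graphql', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: query })
})
  .then(function (response) { return response.json(); })
  .then(function (result) { console.log(result.data); });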

Following are some of the main advantages in GraphQL over Restful APIs.

  1. Hierarchical: Most product development today involves the creation and manipulation of view hierarchies. To achieve congruence with the structure of these applications, a GraphQL query itself is a hierarchical set of fields. The query is shaped just like the data it returns. It is a natural way for product engineers to describe data requirements.
  2. Product-centric: GraphQL is unapologetically driven by the requirements of views and the front-end engineers that write them. We start with their way of thinking and requirements and build the language and runtime necessary to enable that.
  3. Client-specified queries: In GraphQL, the specification for queries are encoded in the client rather than the server. These queries are specified at field-level granularity. In the vast majority of applications written without GraphQL, the server determines the data returned in its various scripted endpoints. A GraphQL query, on the other hand, returns exactly what a client asks for and no more.
  4. Backwards Compatible: In a world of deployed native mobile applications with no forced upgrades, backwards compatibility is a challenge. Facebook, for example, releases apps on a two week fixed cycle and pledges to maintain those apps for at least two years. This means there are at a minimum 52 versions of our clients per platform querying our servers at any given time. Client-specified queries simplifies managing our backwards compatibility guarantees.
  5. Structured, Arbitrary Code: Query languages with field-level granularity have typically queried storage engines directly, such as SQL. GraphQL instead imposes a structure onto a server, and exposes fields that are backed by arbitrary code. This allows for both server-side flexibility and a uniform, powerful API across the entire surface area of an application.
  6. Application-Layer Protocol: GraphQL is an application-layer protocol and does not require a particular transport. It is a string that is parsed and interpreted by a server.
  7. Strongly-typed: GraphQL is strongly-typed. Given a query, tooling can ensure that the query is both syntactically correct and valid within the GraphQL type system before execution, i.e. at development time, and the server can make certain guarantees about the shape and nature of the response. This makes it easier to build high-quality client tools.
  8. Introspective: GraphQL is introspective. Clients and tools can query the type system using the GraphQL syntax itself. This is a powerful platform for building tools and client software, such as automatic parsing of incoming data into strongly-typed interfaces. It is especially useful in statically typed languages such as Swift, Objective-C and Java, as it obviates the need for repetitive and error-prone code to shuffle raw, untyped JSON into strongly-typed business objects.
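For instance, here is a minimal sketch of an introspection query (using the standard __schema field) that a tool could send to the same hypothetical /graphql endpoint to list the server's types:

var introspectionQuery = `
  query {
    __schema {
      types {
        name
      }
    }
  }
`;
// POST this string in the "query" field of the request body, exactly like the earlier example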
[Embedded video: the syntax in GraphQL]


GraphQL opens up a very powerful set of concepts that allows us to rethink our API designs. Even though it won't replace RESTful APIs in every context, there is a high chance it will quickly come to dominate how back ends are exposed to mobile clients.


Monday, August 13, 2012

Managing Computer Vision Syndrome


The computer has become one of the most influential inventions, a seed for many other subsequent inventions. There are about 1 billion computer users all over the world, and most of them work with a computer as part of their profession. Many of them suffer from serious eye issues, collectively known as computer vision syndrome.

There are many causes for this syndrome, and most of them come down to the carelessness of computer users. Some of them are,

  1. Not giving your eyes a rest for long periods
  2. Not using proper lighting for the computer screen and the background
  3. Not maintaining a proper distance between the screen and your eyes.
With all these bad habits, your eyesight gets worse day by day, and it can end with serious, lasting damage to your eyes.

But you can overcome it by making a few simple techniques a habit while using the computer. They are as follows.
  • Get a computer eye exam to check for dry eyes and vision

  • Use proper lighting

  • Minimize glare

  • The brightness of the screen should match your work environment

  • Text size should be three times the smallest text size you can read from your normal viewing position

  • Make blinking more often a habit (otherwise your eyes get dry because of less blinking)

  • Relax your eyes
    • Every 20 minutes, look at an object far away (at least 20 feet away, for at least 20 seconds)
    • Or close your eyes for 20 seconds

  • Take frequent breaks
    • Micro breaks: the 20/20/20 rule
    • Mini breaks: every 1-2 hours, stand up and stretch for a few minutes
    • Maxi breaks: this could be a coffee or a lunch break
Out of all of these, the 20/20/20 rule is the one I find most interesting, and the one most computer users miss.

What is the 20/20/20 Rule?

    Every 20 minutes, look at an object which is located 20 feet away, for 20 seconds

This gives your eyes a real rest and lets your eye muscles focus on distant objects, which helps preserve their normal operation. But most people forget to do this when they are stuck in work.

You can find a simple reminder program for your machine and let it remind you every 20 minutes to take a rest. After you get used to it, it will become automatic.
For Ubuntu users, I have written a small bash script which will remind you every 20 minutes to take a rest. You can download it and follow the instructions in the Readme file.

Start protecting your eyes; they will never recover once they are damaged.

Saturday, May 12, 2012

Improve Performance by Profiling Web Applications


Since the web is the trend these days, most newcomers try to bring their fascinating business ideas to life through web applications. They use nice UI components to make their applications attractive. Initially they don't worry about performance, because they don't feel it makes a big difference at that point. Even if your application has an unoptimized design, you won't notice it on a small data set. But it really matters as your application grows. If there are unoptimized plugins, classes or functions running in your application, they may slow it down when it handles large data sets.

At this point you may wonder how to deal with this. If your application has only a couple of classes, you will be able to go through the code and identify the bottlenecks. But what if it is an application with 100+ classes? You can't go through it all manually. That's where "profiling tools" act as life savers for developers.

Profiling tools instrument the web application and record its behaviour at run time. They then interpret those records to visualize the execution of each code segment at the module, class and function level. Using this output a developer can identify
  • the hierarchy of classes/functions running for a given action
  • how many times each class/function is called by its parents
  • the execution time of each segment.
Based on this, a developer can see which classes/functions take most of the execution time and identify the performance bottlenecks of the application. To give a basic idea of profiling tools, I'll explain how to profile a PHP application with Xdebug and KCacheGrind.

Xdebug is a PHP extension which lets developers debug and profile web applications. While the application is running, Xdebug tracks the events happening in it (on the server) and logs them to a file. This file contains stack trace information, function trace information, memory allocation and much more useful information. We can then explore that data using a visualization tool.

KCacheGrind is a data visualization tool which is used as a profiler frontend. We can use KCacheGrind to open the Xdebug output file and analyze its contents in a meaningful manner.

Following are the simple steps to install Xdebug and KCachegrind in Ubuntu to profile your application. 

Install Xdebug

Installing it via PECL is easy and reliable.
pecl install xdebug

Enable it by adding the following line, with the full path to the extension, to the php.ini or xdebug.ini file. (The exact path depends on your PHP version.)
zend_extension=/usr/lib/php5/20090626/xdebug.so

Then activate profiling by enabling the following settings in the xdebug.ini or php.ini file.

Enable profiling for all requests
xdebug.profiler_enable = 1

On some occasions you may want to skip some requests and profile only selected ones. To profile only requests that carry a GET/POST parameter or a cookie named XDEBUG_PROFILE (e.g. by appending ?XDEBUG_PROFILE to the request URL), enable the following property instead.
xdebug.profiler_enable_trigger = 1
The following properties are also important to make your life easier with Xdebug.

Append to the output file each time the same request comes to the server.
xdebug.profiler_append=1

Name of output files
xdebug.profiler_output_name = cachegrind.out.%s

Name of the output directory (This directory should have write permissions).
xdebug.profiler_output_dir=/tmp/xdebug

Now you are done with the configuration process. Restart the web server to apply your changes.
/etc/init.d/apache2 restart

Install KCacheGrind

Install it via the OS package manager. 
apt-get install kcachegrind

There are some alternatives to KCacheGrind. Webgrind is a web-based tool which has a subset of KCacheGrind's features. But KCacheGrind is rich in features, so I recommend it.

How to profile your application

Now you are armed with enough tools to profile your application. Access your web application through the browser and load the page that needs to be profiled. After a successful page load, you can find the corresponding Xdebug output file in the /tmp/xdebug directory. Now you can open it using KCacheGrind to visualize the output.

kcachegrind
As an example, run the following code in your browser and examine the output.

//test.php
function innerLoop($limit) {
    for ($count = 0; $count < $limit; $count++) {
        echo "*";
    }
}

function outPut($rows) {
    for ($count = 1; $count <= $rows; $count++) {
        innerLoop($count);
        echo "\n";
    }
}

outPut(10);

The output of Xdebug, opened in KCacheGrind, will look as follows.

The Flat Profile lists all the events run in your application. By default, events are sorted by the time taken.

Following is an explanation of the values displayed in this table.
  1. Incl - Total (inclusive) time that the event took, including the events it called
  2. Self - Time spent only within that event itself
  3. Called - Number of times that event was called
  4. Function - Function name
  5. Location - File in which the function is defined
KCacheGrind also provides an execution graph which simplifies the above data. You can get it by selecting the "Call Graph" tab for each event.


With these you can find the time each event spent executing, the hierarchy of events, and how many times each event was called by others. This is a nice interpretation which helps developers find issues in huge web systems. When we consider large systems built on frameworks such as Zend or Symfony, there are a number of utility functions running in the background, so the "Flat Profile" becomes too complicated for quick decisions. The following screenshot is part of the flat profile of an application built on top of Symfony.


This is too complicated, and most of the events are related to the framework. We can use the "Call Graph" in such situations to make our life easier. The following is part of its "Call Graph".


This nicely shows the execution path, and you can make quick decisions based on it.

Likewise, there are more advanced options in KCacheGrind which help you profile your application. Try out those options and find the ones most suitable for profiling your application, which will lead to a faster web application at the end of the day.

Saturday, March 17, 2012

How to use Mysql Transactions with PHP

Web applications are more popular today than ever with the increasing number of internet users. Most standalone applications have been converted to web-based applications, or at least try to provide a web interface for users. PHP and MySQL are two leading technologies which allow rapid development of web-based systems. "Transactions" are a powerful feature available in MySQL 4.0 and above. Let's explore that.

                                             

Web-based applications have different kinds of requirements, and some of them require more consistency in their data than others. As an example, banking software expects its data to be more secure and consistent. Assume a situation where one person (Bob) is transferring some amount of money (XXX) to his friend's (Alice's) bank account, directly from account to account. The queries run against the banking software's database for this transfer would be as follows.

  1. Deduct the XXX amount from Bob's account.
  2. Add the XXX amount to Alice's account.
If both queries execute successfully, all is well. But assume a situation where the first query executes successfully and then the database connection is lost. In that situation the second query never runs against the database, so nothing is added to Alice's account, and the amount deducted from Bob's account is not recovered either. This would be a disaster for Bob, Alice and the bank.

"Transactions" can solve this problem with a great solution which will ensure the consistency of the database states. Simply with Transactions the database can be role back to the initial state where it was before the deduction query executed on the database. Its like this.

  1. Start the transaction.
  2. Deduct the XXX amount from Bob's account.
  3. Add the XXX amount to Alice's account.
  4. If both queries ran successfully, commit the changes.
  5. Otherwise, roll back to the initial state.
MySQL provides several commands for these operations; let's see how we can implement them with PHP.

To support MySQL transactions, PHP 5.0 and above provides a set of functions (the mysqli extension) which are very similar to the normal mysql functions but carry an additional "i" in their names. The most important of them are as follows.

  • mysqli_connect() - connect to MySQL
  • mysqli_select_db() - select a database
  • mysqli_autocommit() - enable or disable the auto-commit option
  • mysqli_query() - run a MySQL query
  • mysqli_rollback() - roll the database back to its initial state
  • mysqli_commit() - commit all the changes to the database
  • mysqli_close() - close the MySQL connection
Let's look at how to implement a transaction as a solution to the above problem.

Since transactions are only supported by the InnoDB storage engine, you have to make sure all the relevant tables in your database use the InnoDB engine.


$host = 'localhost';
$user = 'username';
$password = 'password';
$db = 'transaction';

$con = mysqli_connect($host, $user, $password);
mysqli_select_db($con, $db);

//Cancel the auto commit option in the database
mysqli_autocommit($con, FALSE);

//You can write your own query here
$query1 = "query to deduct XXX amount from Bobs account";
$results[] = mysqli_query($con, $query1);

//You can write your own query here
$query2 = "query to add XXX amount to Alices account";
$results[] = mysqli_query($con, $query2);

$success = true;

foreach ($results as $result) {
    if (!$result) {
        $success = false;
    }
}

if (!$success) {
    mysqli_rollback($con);
} else {
    mysqli_commit($con);
}

mysqli_close($con);

In the above code, the changes are applied only temporarily because we have disabled the auto-commit option on the connection. After the two queries are executed, the code checks their results. If something went wrong with the database connection in the middle of the operations, the changes are not permanently applied to the database. At the end it checks for the success of all queries, and if everything went fine, it commits the changes to the database.

With this code you will be able to make the money-transfer action a consistent database operation. So you are done with a set of consistent database operations using PHP and MySQL.

Still, MySQL only allows statements such as UPDATE, INSERT and DELETE to be rolled back using "Transactions". The following statements cannot be rolled back in MySQL.

  • CREATE DATABASE
  • ALTER DATABASE
  • DROP DATABASE
  • CREATE TABLE
  • ALTER TABLE
  • DROP TABLE
  • RENAME TABLE
  • TRUNCATE TABLE
  • CREATE INDEX
  • DROP INDEX
  • CREATE EVENT
  • DROP EVENT
  • CREATE FUNCTION
  • DROP FUNCTION
  • CREATE PROCEDURE
  • DROP PROCEDURE
You have to be aware of these limitations before you use "Transactions" in your application. Have a great time with MySQL "Transactions".
