Eclipse in Ubuntu


Eclipse is a multi-language integrated development environment (IDE) comprising a base workspace and an extensible plug-in system for customizing the environment. It is written mostly in Java. It can be used to develop applications in Java and, by means of various plug-ins, other programming languages including Ada, C, C++, COBOL, Fortran, Haskell, JavaScript, Perl, PHP, Python, R, Ruby (including the Ruby on Rails framework), Scala, Clojure, Groovy, Scheme, and Erlang. It can also be used to develop packages for the software Mathematica. Development environments include the Eclipse Java development tools (JDT) for Java and Scala, Eclipse CDT for C/C++ and Eclipse PDT for PHP, among others.

The initial codebase originated from IBM VisualAge.[2] The Eclipse software development kit (SDK), which includes the Java development tools, is meant for Java developers. Users can extend its abilities by installing plug-ins written for the Eclipse Platform, such as development toolkits for other programming languages, and can write and contribute their own plug-in modules.

Released under the terms of the Eclipse Public License, Eclipse SDK is free and open source software (although it is incompatible with the GNU General Public License[3]). It was one of the first IDEs to run under GNU Classpath and it runs without problems under IcedTea.

Here are some steps to help you get Eclipse working on Ubuntu.

1. Install Sun Java JDK

sudo apt-get install sun-java6-jdk

2.  Download Eclipse
Go to the official site http://www.eclipse.org/downloads/ and choose your edition.

Save to your Desktop

3. Extract Eclipse
Open Terminal, and execute:

cd ~/Desktop
tar xzf eclipse-php-galileo-linux-gtk.tar.gz (replace with your downloaded file name)
sudo mv eclipse /opt/eclipse
sudo mv eclipse-galileo.png /opt/eclipse
cd /opt
sudo chown -R root:root eclipse
sudo chmod -R 755 eclipse
cd /opt/eclipse
sudo chmod +x eclipse

4. Create a .desktop file to eclipse:

gedit ~/.local/share/applications/opt_eclipse.desktop

Then, paste this inside (don't forget to edit the Exec and Icon values):

[Desktop Entry]
Type=Application
Name=Eclipse
Comment=Eclipse Integrated Development Environment
Icon=** something like /opt/eclipse/icon.xpm **
Exec= ** something like /opt/eclipse/eclipse **
Terminal=false
Categories=Development;IDE;Java;
StartupWMClass=Eclipse

After that, open that folder with nautilus:

nautilus ~/.local/share/applications

If you want to use this launcher outside the Dash/Launcher (e.g. as a desktop launcher), you need to add execution permission by right-clicking the file and choosing Properties -> Permissions -> Allow execution, or via the command line:

chmod +x ~/.local/share/applications/opt_eclipse.desktop

Finally, drag opt_eclipse.desktop onto the launcher.


Uploaded on Oct 29, 2011

A short walkthrough of the Eclipse Software Development Kit.

Plugins used in this video:
1. PHPEclipse (http://www.phpeclipse.com/)
2. Aptana Studio (http://www.aptana.com/)
3. Subversive (http://www.eclipse.org/subversive/)

Uploaded on Nov 24, 2011

Tutorial showing installation, requirements and configuration of Eclipse itself and the PHPEclipse plug-in.

Link mentioned in the video regarding line endings: http://www.evolt.org/node/60247 (scroll to Linefeeds part)

Published on Mar 16, 2013

A short tutorial outlining the features of PHPEclipse.

 

Published on Mar 22, 2013

A quick walkthrough on all the goodies Aptana plugin for Eclipse provides when editing HTML, CSS and JavaScript code.

Link about Java 7 and FTP problems on Windows 7+ mentioned in the video: http://stackoverflow.com/questions/69…

 

Published on Apr 3, 2013

Quick tips and tricks to help you effectively tackle the most repetitive activities during development, including an extra safeguard tip using the Local History.

 

Published on May 10, 2013

Presentation of 2 ways I know of to work with FTP and synchronization in Eclipse:

1. utilizing Aptana’s remote synchronization (http://www.aptana.com)
2. using the not-yet-so-deprecated FTP and WebDav Eclipse plugin (http://jcraft.com, http://eclipse.jcraft.com)

Published on May 26, 2013

Quick introduction to remote versioning systems with a peek into Eclipse’s SVN interface and TortoiseSVN program.

Link to SourceForge: https://sourceforge.net/
Link to GitHub: https://github.com/
Link to the Timeline: Inventions project: https://sourceforge.net/projects/time…

Unity launchers


Unity Launchers are actually files stored on your computer, with a '.desktop' extension. In earlier Ubuntu versions, these files were simply used to launch a specific application, but in Unity they are also used to create right-click menus for each application, which you can access from the Unity Launcher.

This article describes how to create a working .desktop file for general use, but also how to add it to the Unity Launcher and/or how to edit a Unity Launcher itself, by editing its fields or by adding a right-click menu to it.

 

Creating a working .desktop file

There are currently two ways of creating a desktop file. The first is using a text editor, like Gedit, and the second is installing a program (gnome-panel) or using 'alacarte', both of which do the job for you. The former gives you more control over your launcher than the latter, but the latter way is easier. Please note that this section will cover only the basics, not how to add shortcuts to your launcher. For that, please head to Adding shortcuts to a launcher.
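If you go the gnome-panel route, the launcher-creation dialog can be opened from a terminal. A minimal sketch, assuming you want the new file placed in ~/.local/share/applications:

# install gnome-panel (which provides the dialog) without its recommended extras
sudo apt-get install --no-install-recommends gnome-panel
# open the "Create Launcher" dialog and save the resulting .desktop file
gnome-desktop-item-edit ~/.local/share/applications/ --create-new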

Using a text editor

Open your favourite text editor, like Gedit or nano, and type in (copy and paste):

[Desktop Entry]
Version=x.y
Name=ProgramName
Comment=This is my comment
Exec=/home/alex/Documents/exec.sh
Icon=/home/alex/Pictures/icon.png
Terminal=false
Type=Application
Categories=Utility;Application;

These lines are enough for describing a simple launcher. Each launcher (.desktop file) consists of some basic fields.

  • Version is the version of this .desktop file.
  • Name is the name of the application, like ‘VLC media player’.
  • Comment is a phrase or two describing what this program does, like ‘Plays your music and videos files’.
  • Exec is the path to the executable file. The full path to the executable file must be used only in case it isn’t in any of the paths specified in the $PATH variable. For example, any files that are inside the path /usr/bin don’t need to have their full path specified in the Exec field, but only their filename. To see all the paths in the $PATH variable you can open a terminal using Ctrl+Alt+T and type in
    echo $PATH
  • Icon field is the icon that should be used by the launcher and represents the application. All icons that are under the directory /usr/share/pixmaps don’t need to have their full path specified, but their filename without the extension. For example, if the icon file is /usr/share/pixmaps/wallch.png, then the Icon field should be just ‘wallch’. All other icons should have their full path specified.
  • Terminal field specifies whether the application should run in a terminal window or not.
  • Type field specifies the type of the launcher file. The type can be Application, Link or Directory, but this article covers the ‘Application’ type.
  • Categories field specifies the category of the application. It is used by the Dash to categorize the applications. A launcher with 'Utility;Application;' should appear under the 'Accessories' section, etc.

A realistic example of what a .desktop file looks like is the following:

[Desktop Entry]
Version=1.0
Name=BackMeUp
Comment=Back up your data with one click
Exec=/home/alex/Documents/backup.sh
Icon=/home/alex/Pictures/backup.png
Terminal=false
Type=Application
Categories=Utility;Application;

One last thing to add: by setting executable rights on your .desktop file, it automatically takes on the Icon and Name specified in the corresponding fields, as it should. Be careful though: the filename doesn't actually change. It remains 'launcher_name_here.desktop' rather than 'Name_field_here'; the system simply displays it as 'Name_field_here' because it looks nicer without the .desktop extension.

Adding a .desktop file to the Unity Launcher

In order to add your launcher to the Unity Launcher on the left, you have to place your .desktop file at /usr/share/applications/ or at ~/.local/share/applications/. After moving your file there, search for it in the Dash (Windows key -> type the name of the application) and drag and drop it to the Unity Launcher. Now your launcher (.desktop file) is locked on the Unity Launcher! If your desktop file cannot be found by doing a search from the Dash, you may need to read on…

To be more certain that your .desktop file will work properly, use the desktop file validator, which will notify you of any errors or omissions. If there are no errors, desktop-file-validate will exit silently.

Once the file validates correctly, install it to the default location (probably /usr/share/applications) using the desktop-file-install program. This step may require superuser privileges. The desktop-file-install program may add some lines of its own to your .desktop file. There is no need to have the .desktop file be executable by anyone.

Please note that desktop-file-validate tends to be oversensitive at times, which means that it can output error messages on perfectly working .desktop files. Those error messages should be better seen as warnings rather than anything else. For more information on desktop entry specification please refer to http://standards.freedesktop.org/desktop-entry-spec/latest/
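As a rough sketch of the validate-and-install steps described above, using the opt_eclipse.desktop launcher created earlier on this page as the example file:

# validate the launcher; no output means no errors were found
desktop-file-validate ~/.local/share/applications/opt_eclipse.desktop
# install it to the system-wide location (requires superuser privileges)
sudo desktop-file-install ~/.local/share/applications/opt_eclipse.desktop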

Perl


Perl is a family of high-level, general-purpose, interpreted, dynamic programming languages. The languages in this family include Perl 5 and Perl 6.[4]

Though Perl is not officially an acronym,[5] there are various backronyms in use, such as: Practical Extraction and Reporting Language.[6] Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier.[7] Since then, it has undergone many changes and revisions. The latest major stable revision of Perl 5 is 5.18, released in May 2013. Perl 6, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another.

The Perl languages borrow features from other programming languages including C, shell scripting (sh), AWK, and sed.[8] They provide powerful text processing facilities without the arbitrary data-length limits of many contemporary Unix tools,[9] facilitating easy manipulation of text files. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its parsing abilities.[10]

In addition to CGI, Perl 5 is used for graphics programming, system administration, network programming, finance, bioinformatics, and other applications. It’s nicknamed “the Swiss Army chainsaw of scripting languages” because of its flexibility and power,[11] and possibly also because of its perceived “ugliness”.[12] In 1998, it was also referred to as the “duct tape that holds the Internet together”, in reference to its ubiquity and perceived inelegance.[13]

Perl was originally named “Pearl”. Wall wanted to give the language a short name with positive connotations; he claims that he considered (and rejected) every three- and four-letter word in the dictionary. He also considered naming it after his wife Gloria. Wall discovered the existing PEARL programming language before Perl’s official release and changed the spelling of the name.[36]

When referring to the language, the name is normally capitalized (Perl) as a proper noun. When referring to the interpreter program itself, the name is often uncapitalized (perl) because most Unix-like file systems are case-sensitive. Before the release of the first edition of Programming Perl, it was common to refer to the language as perl; Randal L. Schwartz, however, capitalized the language’s name in the book to make it stand out better when typeset. This case distinction was subsequently documented as canonical.[37]

There is some contention about the all-caps spelling “PERL”, which the documentation declares incorrect[37] and which some core community members consider a sign of outsiders.[38] The name is occasionally expanded as Practical Extraction and Report Language, but this is a backronym.[39] Other expansions have been suggested as equally canonical, including Wall’s own humorous Pathologically Eclectic Rubbish Lister.[40] Indeed, Wall claims that the name was intended to inspire many different expansions.[41]

The Comprehensive Perl Archive Network (CPAN) currently has 121,260 Perl modules in 27,769 distributions, written by 10,733 authors, mirrored on 270 servers.

The archive has been online since October 1995 and is constantly growing.

CPAN, the Comprehensive Perl Archive Network, is an archive of over 114,000 modules of software written in the Perl programming language, as well as documentation for them.[1] It has a presence on the World Wide Web at www.cpan.org and is mirrored worldwide at more than 200 locations.[2] CPAN can denote either the archive network itself, or the Perl program that acts as an interface to the network and as an automated software installer (somewhat like a package manager). Most software on CPAN is free and open source software.[3] CPAN was conceived in 1993, and the first web-accessible mirror was launched in January 1997.[4]

Like many programming languages, Perl has mechanisms to use external libraries of code, making one file contain common routines used by several programs. Perl calls these modules. Perl modules are typically installed in one of several directories whose paths are placed in the Perl interpreter when it is first compiled; on Unix-like operating systems, common paths include /usr/lib/perl5, /usr/local/lib/perl5, and several of their subdirectories.
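One quick way to see exactly which directories your own perl searches for modules (the compiled-in paths plus any additions) is to print its @INC array:

perl -e 'print "$_\n" for @INC'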

Perl comes with a small set of core modules. Some of these perform bootstrapping tasks, such as ExtUtils::MakeMaker, which is used for building and installing other extension modules; others, like CGI.pm, are merely commonly used. The authors of Perl do not expect this limited group to meet every need, however.

The CPAN's main purpose is to help programmers locate modules and programs not included in the Perl standard distribution. Its structure is decentralized: authors maintain and improve their own modules, and forking, or creating competing modules for the same task or purpose, is common. There is no formal bug tracking system, but there is a third-party bug tracking system that CPAN has designated as the suggested official way of reporting issues with modules. Continuous development on modules is rare; many are abandoned by their authors, or go years between releases. Sometimes a maintainer will be appointed to an abandoned module; they can release new versions of the module and accept patches from the community as their time permits. CPAN has no revision control system, although the source for the modules is often stored on GitHub. The complete history of the CPAN and all its modules is also available as the GitPAN project, which makes it easy to see the full history of every module and to maintain forks. CPAN is also used to distribute new versions of Perl, as well as related projects, such as Parrot.

The CPAN is an important resource for the professional Perl programmer. With over 23,000 modules (containing 20,000,000 lines of code) as of July 2011, the CPAN can save programmers weeks of time, and large Perl programs often make use of dozens of modules. Some of them, such as the DBI family of modules used for interfacing with SQL databases, are nearly irreplaceable in their area of function; others, such as the List::Util module, are simply handy resources containing a few common functions.

Files on the CPAN are referred to as distributions. A distribution may consist of one or more modules, documentation files, or programs packaged in a common archiving format, such as a gzipped tar archive or a ZIP file. Distributions will often contain installation scripts (usually called Makefile.PL or Build.PL) and test scripts which can be run to verify the contents of the distribution are functioning properly. New distributions are uploaded to the Perl Authors Upload Server, or PAUSE (see the section Uploading distributions with PAUSE).

In 2003, distributions started to include metadata files, called META.yml, indicating the distribution’s name, version, dependencies, and other useful information; however, not all distributions contain metadata. When metadata is not present in a distribution, the PAUSE’s software will usually try to analyze the code in the distribution to look for the same information; this is not necessarily very reliable.

With thousands of distributions, CPAN needs to be structured to be useful. Distributions on the CPAN are divided into 24 broad chapters based on their purpose, such as Internationalization and Locale; Archiving, Compression, And Conversion; and Mail and Usenet News. Distributions can also be browsed by author. Finally, the natural hierarchy of Perl module names (such as “Apache::DBI” or “Lingua::EN::Inflect”) can sometimes be used to browse modules in the CPAN.

CPAN module distributions usually have names in the form of CGI-Application-3.1 (where the :: used in the module’s name has been replaced with a dash, and the version number has been appended to the name), but this is only a convention; many prominent distributions break the convention, especially those that contain multiple modules. Security restrictions prevent a distribution from ever being replaced, so virtually all distribution names do include a version number.

There is also a Perl core module named CPAN; it is usually differentiated from the repository itself by using the name CPAN.pm. CPAN.pm is mainly an interactive shell which can be used to search for, download, and install distributions. An interactive shell called cpan is also provided in the Perl core, and is the usual way of running CPAN.pm. After a short configuration process and mirror selection, it uses tools available on the user’s computer to automatically download, unpack, compile, test, and install modules. It is also capable of updating itself.

More recently, an effort to replace CPAN.pm with something cleaner and more modern has resulted in the CPANPLUS (or CPAN++) set of modules. CPANPLUS separates the back-end work of downloading, compiling, and installing modules from the interactive shell used to issue commands. It also supports several advanced features, such as cryptographic signature checking and test result reporting. Finally, CPANPLUS can uninstall a distribution. CPANPLUS was added to the Perl core in version 5.10.0.

Both modules can check a distribution’s dependencies and can be set to recursively install any prerequisites, either automatically or with individual user approval. Both support FTP and HTTP and can work through firewalls and proxies.

Install all dependent packages for CPAN

sudo apt-get install build-essential

Invoke the cpan command as a normal user

cpan

Once you hit Enter and cpan starts, you will be asked a few questions. To make it simple for yourself, answer "no" to the first question so that the remaining ones are handled for you automatically.

Then enter the following commands at the cpan prompt:

make install
install Bundle::CPAN

Now everything is set up and you can install any Perl module you want.

Type o conf init to reconfigure cpan.
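As a small illustration of the workflow (List::Util is just an example module; substitute whatever you need), a typical session at the cpan prompt looks like this:

cpan
cpan[1]> install List::Util
cpan[2]> q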

The Best Perl Programmers Use Modern Perl

by chromatic

In 1987, Perl 1.0 changed the world. In the decades since then, the language has grown from a simple tool for system administration somewhere between shell scripting and C programming to a powerful, general purpose language steeped in a rich heritage.

Even so, most Perl 5 programs in the world take far too little advantage of the language. You can write Perl 5 programs as if they were Perl 4 programs (or Perl 3 or 2 or 1), but programs written to take advantage of everything amazing the worldwide Perl 5 community has invented, polished, and discovered are shorter, faster, more powerful, and easier to maintain than their alternatives.

They solve difficult problems with speed and elegance. They take advantage of the CPAN and its unparalleled library of reusable code. They get things done.

This productivity can be yours, whether you’ve dabbled with Perl for a decade or someone just handed you this book and said “Fix this code by Friday.”

Modern Perl is suitable for programmers of every level. It’s more than a Perl tutorial—only Modern Perl focuses on Perl 5.12 and 5.14, to demonstrate the latest and most effective time-saving features. Only Modern Perl explains how and why the language works, to let you unlock the full power of Perl.

Hone your skills. Sharpen your knowledge of the tools and techniques that make Perl so effective. Master everything Perl has to offer.

When you have to solve a problem now, reach for Perl. When you have to solve a problem right, reach for Modern Perl.

Visit the companion website at Modern Perl Books or read Modern Perl: the Book online.

Modern Perl installations include two clients to connect to, search, download, build, test, and install CPAN distributions, CPAN.pm and CPANPLUS. For the most part, each of these clients is equivalent for basic installation. This book recommends the use of CPAN.pm solely due to its ubiquity. With a recent version (as of this writing, 1.9800 is the latest stable release), module installation is reasonably easy. Start the client with:

    $ cpan

To install a distribution within the client:

    $ cpan
    cpan[1]> install Modern::Perl

… or to install directly from the command line:

    $ cpan Modern::Perl

Eric Wilhelm’s tutorial on configuring CPAN.pm http://learnperl.scratchcomputing.com/tutorials/configuration/ includes a great troubleshooting section.

cURL


cURL is a computer software project providing a library and command-line tool for transferring data using various protocols. The cURL project produces two products, libcurl and cURL. It was first released in 1997.

curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos…), file transfer resume, proxy tunneling and a busload of other useful tricks.

Working with HTTP from the command-line is a valuable skill for HTTP architects and API designers to have. The cURL library and curl command give you the ability to design a Request, put it on the pipe, and explore the Response. The downside to the power of curl is how much breadth its options cover. Running curl --help spits out 150 different flags and options. This article demonstrates nine basic, real-world applications of curl.

In this tutorial we’ll use the httpkit echo service as our end point. The echo server’s Response is a JSON representation of the HTTP request it receives.

Make a Request

Let’s start with the simplest curl command possible.

Request
curl http://echo.httpkit.com
Response
{
  "method": "GET",
  "uri": "/",
  "path": {
    "name": "/",
    "query": "",
    "params": {}
  },
  "headers": {
    "host": "echo.httpkit.com",
    "user-agent": "curl/7.24.0 ...",
    "accept": "*/*"
  },
  "body": null,
  "ip": "28.169.144.35",
  "powered-by": "http://httpkit.com",
  "docs": "http://httpkit.com/echo"
}

Just like that we have used curl to make an HTTP Request. The method, or "verb", curl uses by default is GET. The resource, or "noun", we are requesting is addressed by the URL pointing to the httpkit echo service, http://echo.httpkit.com.

You can add path and query string parameters right to the URL.

Request
curl http://echo.httpkit.com/path?query=string
Response
{ ...
  "uri": "/path?query=string",
  "path": {
    "name": "/path",
    "query": "?query=string",
    "params": {
      "query": "string"
    }
  }, ...
}

Set the Request Method

The curl default HTTP method, GET, can be set to any method you would like using the -X option. The usual suspects (POST, PUT, DELETE), and even custom methods, can be specified.

Request
curl -X POST echo.httpkit.com
Response
{
    "method": "POST",
    ...
}

As you can see, the http:// protocol prefix can be dropped with curl because it is assumed by default. Let’s give DELETE a try, too.

Request
curl -X DELETE echo.httpkit.com
Response
{
    "method": "DELETE",
    ...
}

Set Request Headers

Request headers allow clients to provide servers with meta information about things such as authorization, capabilities, and body content-type. OAuth2 uses an Authorization header to pass access tokens, for example. Custom headers are set in curl using the -H option.

Request
curl -H "Authorization: OAuth 2c4419d1aabeec" \
     http://echo.httpkit.com

Response
{...
"headers": {
    "host": "echo.httpkit.com",
    "authorization": "OAuth 2c4419d1aabeec",
  ...},
...}

Multiple headers can be set by using the -H option multiple times.

Request
curl -H "Accept: application/json" \
     -H "Authorization: OAuth 2c3455d1aeffc" \
     http://echo.httpkit.com

Response
{ ...
  "headers": { ...
    "host": "echo.httpkit.com",
    "accept": "application/json",
    "authorization": "OAuth 2c3455d1aeffc"
   }, ...
}

Send a Request Body

Many popular HTTP APIs today POST and PUT resources using application/json or application/xml rather than HTML form data. Let's try PUTing some JSON data to the server.

Request
curl -X PUT \
     -H 'Content-Type: application/json' \
     -d '{"firstName":"Kris", "lastName":"Jordan"}' \
     echo.httpkit.com
Response
{
   "method": "PUT", ...
   "headers": { ...
     "content-type": "application/json",
     "content-length": "40"
   },
   "body": "{\"firstName\":\"Kris\",\"lastName\":\"Jordan\"}",
   ...
 }

Use a File as a Request Body

Escaping JSON/XML at the command line can be a pain and sometimes the body payloads are large files. Luckily, cURL’s @readfile macro makes it easy to read in the contents of a file. If we had the above example’s JSON in a file named “example.json” we could have run it like this, instead:

Request
curl -X PUT \
     -H 'Content-Type: application/json' \
     -d @example.json \
     echo.httpkit.com

POST HTML Form Data

Being able to set a custom method, like POST, is of little use if we can’t also send a request body with data. Perhaps we are testing the submission of an HTML form. Using the -d option we can specify URL encoded field names and values.

Request
curl -d "firstName=Kris" \
     -d "lastName=Jordan" \
     echo.httpkit.com
Response
{
  "method": "POST", ...
  "headers": {
    "content-length": "30",
    "content-type":"application/x-www-form-urlencoded"
  },
  "body": "firstName=Kris&lastName=Jordan", ...
}

Notice the method is POST even though we did not specify it. When curl sees form field data it assumes POST. You can override the method using the -X flag discussed above. The “Content-Type” header is also automatically set to “application/x-www-form-urlencoded” so that the web server knows how to parse the content. Finally, the request body is composed by URL encoding each of the form fields.

POST HTML Multipart / File Forms

What about HTML forms with file uploads? As you know from writing HTML file upload forms, these use a multipart/form-data Content-Type, specified with the enctype attribute in HTML. In cURL we can pair the -F option with the @readFile macro covered above.

Request
curl -F "firstName=Kris" \
     -F "publicKey=@id_rsa.pub;type=text/plain" \
     echo.httpkit.com
Response
{
  "method": "POST",
  ...
  "headers": {
    "content-length": "697",
    "content-type": "multipart/form-data;
    boundary=----------------------------488327019409",
    ... },
  "body": "------------------------------488327019409\r\n
           Content-Disposition: form-data;
           name=\"firstName\"\r\n\r\n
           Kris\r\n
           ------------------------------488327019409\r\n
           Content-Disposition: form-data;
           name=\"publicKey\";
           filename=\"id_rsa.pub\"\r\n
           Content-Type: text/plain\r\n\r\n
           ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAkq1lZYUOJH2
           ... more [a-zA-Z0-9]* ...
           naZXJw== krisjordan@gmail.com\n\r\n
           ------------------------------488327019409--\r\n",
...}

Like with the -d flag, when using -F curl will automatically default to the POST method, set the multipart/form-data content-type header, calculate the content length, and compose the multipart body for you. Notice how the @readFile macro will read the contents of a file into any string; it's not just a standalone operator. The ";type=text/plain" part specifies the MIME content-type of the file. Left unspecified, curl will attempt to sniff the content-type for you.

Test Virtual Hosts, Avoid DNS

Testing a virtual host or a caching proxy before modifying DNS and without overriding hosts is useful on occasion. With cURL, just point the request at your host's IP address and override the default Host header cURL sets.

Request
curl -H "Host: google.com" 50.112.251.120
Response
{
  "method": "GET", ...
  "headers": {
    "host": "google.com", ...
  }, ...
}

View Response Headers

APIs are increasingly making use of response headers to provide information on authorization, rate limiting, caching, etc. With cURL you can view the headers and the body using the -i flag.

Request
curl -i echo.httpkit.com
Response
HTTP/1.1 200 OK
Server: nginx/1.1.19
Date: Wed, 29 Aug 2012 04:18:19 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 391
Connection: keep-alive
X-Powered-By: http://httpkit.com

{
  "method": "GET",
  "uri": "/", ...
}

Shameless plug: Do you hack on REST API integrations or implementations? Wiretap is an HTTP debugger you can use to see every request and response between any client and HTTP API in real time. It’s entering private beta soon. Help test it!

To install the PHP bindings for cURL on an Ubuntu system (probably Debian too):

$ sudo apt-get install php5-curl

The basic idea behind the cURL functions is that you initialize a cURL session using curl_init(), set all your options for the transfer via curl_setopt(), execute the session with curl_exec(), and then finish off the session using curl_close(). Here is an example that uses the cURL functions to fetch the example.com homepage into a file:

<?php

$ch = curl_init("http://example.iana.org/");
$fp = fopen("example_homepage.txt", "w");

curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);

curl_exec($ch);
curl_close($ch);
fclose($fp);
?>
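For comparison, the same fetch-into-a-file job can be done with the curl command-line tool covered earlier in this section; a rough equivalent of the PHP example above is:

# -o writes the response body to the named file instead of standard output
curl -o example_homepage.txt http://example.iana.org/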

Ubuntu LAMP


XAMPP is an easy to install Apache distribution containing MySQL, PHP and Perl. XAMPP is really very easy to install and to use – just download, extract and start.


This is to help people setup and install a LAMP (Linux-Apache-MySQL-PHP) server in Ubuntu, including Apache 2, PHP 5 and MySQL 4.1 or 5.0.

To install the default LAMP stack in Ubuntu 10.04 and above

First install tasksel…

 

$ sudo apt-get install tasksel

… and then the LAMP stack:

 

$ sudo tasksel install lamp-server

See Tasksel – be warned, only use tasksel to install tasks, not to remove them – see https://launchpad.net/bugs/574287
DO NOT UNCHECK ANY PACKAGES IN THE MENU WHICH APPEARS
Doing so can leave your system in an unusable state.

Starting over: How to remove the LAMP stack

To remove the LAMP stack remove the following packages:

  • Note: This assumes you have no other programs that require any of these packages. You might wish to simulate this removal first, and only remove the packages that don’t cause removal of something desired.

 

apache2 apache2-mpm-prefork apache2-utils apache2.2-common libapache2-mod-php5 libapr1 libaprutil1 libdbd-mysql-perl libdbi-perl libnet-daemon-perl libplrpc-perl libpq5 mysql-client-5.5 mysql-common mysql-server mysql-server-5.5 php5-common php5-mysql

To also remove the debconf data, use the purge option when removing. To get rid of any configurations you may have made to apache, manually remove the /etc/apache2 directory once the packages have been removed.

You may also want to purge these packages:

mysql-client-core-5.5 mysql-server-core-5.5
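Following the simulation advice above, a minimal sketch (listing only a few of the packages; adjust it to the full list you intend to remove) could look like this:

# -s only simulates: it prints what would be removed without changing anything
sudo apt-get -s remove apache2 mysql-server php5-common
# once you are happy with the result, remove for real, purging the debconf data too
sudo apt-get remove --purge apache2 mysql-server php5-common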

 

Installing Apache 2

To only install the apache2 webserver, use any method to install:

 

apache2

It requires a restart for it to work:

 

$ sudo /etc/init.d/apache2 restart

or

 

$ sudo service apache2 restart

 

Checking Apache 2 installation

With your web browser, go to the URI http://localhost. If you see "It works!", which is the content of the file /var/www/index.html, Apache is working.

 

Troubleshooting Apache

If you get this error:

apache2: Could not determine the server’s fully qualified domain name, using 127.0.0.1 for ServerName

then use a text editor such as “sudo nano” at the command line or “gksudo gedit” on the desktop to create a new file,

$ sudo nano /etc/apache2/conf.d/fqdn

or

$ gksu "gedit /etc/apache2/conf.d/fqdn"

then add

ServerName localhost

to the file and save. This can all be done in a single command with the following:

$ echo "ServerName localhost" | sudo tee /etc/apache2/conf.d/fqdn

 

Virtual Hosts

Apache2 has the concept of sites, which are separate configuration files that Apache2 will read. These are available in /etc/apache2/sites-available. By default, there is one site available, called default; this is what you will see when you browse to http://localhost or http://127.0.0.1. You can have many different site configurations available, and activate only those that you need.

As an example, we want the default site to be /home/user/public_html/. To do this, we must create a new site and then enable it in Apache2.

To create a new site:

  • Copy the default website as a starting point: sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/mysite
  • Edit the new configuration file in a text editor ("sudo nano" on the command line or "gksudo gedit" on the desktop), for example: gksudo gedit /etc/apache2/sites-available/mysite
  • Change the DocumentRoot to point to the new location, for example /home/user/public_html/
  • Change the Directory directive, replacing <Directory /var/www/> with <Directory /home/user/public_html/>
  • You can also set separate logs for each site. To do this, change the ErrorLog and CustomLog directives. This is optional, but handy if you have many sites
  • Save the file

Now we must deactivate the old site and activate our new one. Ubuntu provides two small utilities that take care of this: a2ensite (apache2 enable site) and a2dissite (apache2 disable site).

 

$ sudo a2dissite default && sudo a2ensite mysite

Finally, we restart Apache2:

 

$ sudo /etc/init.d/apache2 restart

If you have not created /home/user/public_html/, you will receive a warning message.

To test the new site, create a file in /home/user/public_html/:

 

$ echo '<b>Hello! It is working!</b>' > /home/user/public_html/index.html

Finally, browse to http://localhost/
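Putting the steps of this section together, here is a rough non-interactive sketch. The sed line assumes the copied site file still contains the stock /var/www paths, so review the result before enabling the site:

mkdir -p /home/user/public_html
sudo cp /etc/apache2/sites-available/default /etc/apache2/sites-available/mysite
# point DocumentRoot and the Directory block at the new location
sudo sed -i 's|/var/www|/home/user/public_html|g' /etc/apache2/sites-available/mysite
sudo a2dissite default && sudo a2ensite mysite
sudo /etc/init.d/apache2 restart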

 

Installing PHP 5

To install only PHP5, use any method to install the package

 

libapache2-mod-php5

Enable this module by doing

$ sudo a2enmod php5

which creates a symbolic link /etc/apache2/mods-enabled/php5 pointing to /etc/apache2/mods-available/php5.

Unless you use deprecated PHP code beginning with just "<?" instead of "<?php" (which is highly inadvisable), open, as root, the file /etc/php5/apache2/php.ini, look for the line "short_open_tag = On", change it to "short_open_tag = Off" (not including the quotation marks), and add a comment line (beginning with a semicolon) giving the reason, the author and the date of this change. This way, if you later want some XML or XHTML file to be served as PHP, the "<?xml" tag will be ignored by PHP instead of being treated as a PHP syntax error.
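If you prefer to make that php.ini change from the command line, a minimal sketch (it assumes the line currently reads exactly "short_open_tag = On"):

# back up php.ini, then turn short_open_tag off
sudo cp /etc/php5/apache2/php.ini /etc/php5/apache2/php.ini.orig
sudo sed -i 's/^short_open_tag = On/short_open_tag = Off/' /etc/php5/apache2/php.ini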

Relaunch Apache 2 again:

 

$ sudo service apache2 restart

 

Checking PHP 5 installation

In /var/www, create a text file called "test.php", grant the world (or, at least, the Ubuntu user "apache") permission to read it, and put in it the single line "<?php phpinfo(); ?>" (without the quotation marks). Then, with your web browser, go to the URI "http://localhost/test.php": if you see a description of the PHP5 configuration, PHP 5 is working with Apache.
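For example, the test file can be created in a single command from a terminal (tee needs root rights to write into /var/www):

echo '<?php phpinfo(); ?>' | sudo tee /var/www/test.php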

 

Troubleshooting PHP 5

Does your browser ask if you want to download the php file instead of displaying it? If Apache is not actually parsing the php after you restarted it, install libapache2-mod-php5. It is installed when you install the php5 package, but may have been removed inadvertently by packages which need to run a different version of php.

If sudo a2enmod php5 returns "This module does not exist!", you should purge (not just remove) the libapache2-mod-php5 package and reinstall it.

Be sure to clear your browser’s cache before testing your site again. To do this in Firefox 4: Edit → Preferences … Privacy → History: clear your recent history → Details : choose “Everything” in “Time range to clean” and check only “cache”, then click on “Clear now”.

Remember that, for Apache to be called, the URI in your web browser must begin with "http://". If it begins with "file://", then the file is read directly by the browser, without Apache, so you get (X)HTML and CSS, but no PHP. If you didn't configure any host alias or virtual host, then a local URI begins with "http://localhost", "http://127.0.0.1", or "http://" followed by your IP address.

If the problem persists, check your PHP file authorisations (it should be readable at least by Ubuntu user “apache”), and check if the PHP code is correct. For instance, copy your PHP file, replace your whole PHP file content by “<?php phpinfo(); ?>” (without the quotation marks): if you get the PHP test page in your web browser, then the problem is in your PHP code, not in Apache or PHP configuration nor in file permissions. If this doesn’t work, then it is a problem of file authorisation, Apache or PHP configuration, cache not emptied, or Apache not running or not restarted. Use the display of that test file in your web browser to see the list of files influencing PHP behaviour.

 

php.ini development vs. production

After a standard installation, the PHP configuration file /etc/php5/apache2/php.ini uses "production" settings, which means, among other things, that no error messages are displayed. So if you, for example, make a syntax error in your PHP source file, the Apache server will return an HTTP 500 error instead of displaying the PHP syntax error debug message.

If you want to debug your scripts, it might be better to use the "development" settings. Sample ini files for both the development and the production settings are shipped with PHP:

/usr/share/doc/php5-common/examples/php.ini-development 

/usr/share/php5/php.ini-production

so you can compare them and see the exact differences.
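For example, to see the exact differences between the two sample files:

diff -u /usr/share/doc/php5-common/examples/php.ini-development /usr/share/php5/php.ini-production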

To make the "development" settings active, just back up your original php.ini

sudo mv /etc/php5/apache2/php.ini /etc/php5/apache2/php.ini.bak

and create a symlink to your desired settings:

sudo cp -s /usr/share/doc/php5-common/examples/php.ini-development /etc/php5/apache2/php.ini

or you may of course also edit the /etc/php5/apache2/php.ini directly on your own, if you wish.

 

PHP in user directories

According to this blog, newer versions of Ubuntu do not have PHP enabled by default for user directories (your public_html folder). See the blog for instructions on how to change this back.

 

Installing MYSQL with PHP 5

Use any method to install

 

mysql-server libapache2-mod-auth-mysql php5-mysql

 

After installing PHP

You may need to increase the memory limit that PHP imposes on a script. Edit the /etc/php5/apache2/php.ini file and increase the memory_limit value.
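A minimal sketch of such a change (128M is only an example value; check the current setting first):

grep memory_limit /etc/php5/apache2/php.ini
sudo sed -i 's/^memory_limit = .*/memory_limit = 128M/' /etc/php5/apache2/php.ini
sudo service apache2 restart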

 

After installing MySQL

 

Set mysql bind address

Before you can access the database from other computers in your network, you have to change its bind address. Note that this can be a security problem, because your database can be accessed by other computers than your own. Skip this step if the applications which require mysql are running on the same machine.

type:

$ sudo nano /etc/mysql/my.cnf

and change the line:

bind-address           = localhost

to your own internal ip address e.g. 192.168.1.20

bind-address           = 192.168.1.20

If your ip address is dynamic you can also comment out the bind-address line and it will default to your current ip.

If you try to connect without changing the bind-address you will receive a "Can not connect to mysql error 10061".

 

Set mysql root password

Before accessing the database by console you need to type:

$ mysql -u root

At the mysql console type:

$ mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('yourpassword');

A successful mysql command will show:

Query OK, 0 rows affected (0.00 sec)

Mysql commands can span several lines. Do not forget to end your mysql command with a semicolon.

Note: If you have already set a password for the mysql root, you will need to use:

$ mysql -u root -p

(Did you forget the mysql-root password? See MysqlPasswordReset.)

 

Create a mysql database

 

$ mysql> CREATE DATABASE database1;

 

Create a mysql user

For creating a new user with all privileges (use only for troubleshooting), at mysql prompt type:

$ mysql> GRANT ALL PRIVILEGES ON *.* TO 'yourusername'@'localhost' IDENTIFIED BY 'yourpassword' WITH GRANT OPTION;

For creating a new user with fewer privileges (this should work for most web applications) which can only use the database named "database1", at the mysql prompt type:

$ mysql> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES ON database1.* TO 'yourusername'@'localhost' IDENTIFIED BY 'yourpassword';

yourusername and yourpassword can be anything you like. database1 is the name of the database the user gets access to. localhost is the location which gets access to your database. You can change it to '%' (or to hostnames or IP addresses) to allow connections from every location (or only from specific locations) to the database. Note that this can be a security problem and should only be used for testing purposes!

To exit the mysql prompt type:

$ mysql> \q

Since the mysql root password is now set, if you need to use mysql again (as the mysql root), you will need to use:

$ mysql -u root -p

and then enter the password at the prompt.

 

Backup-Settings

To do: document which directories MySQL stores its database files in and how to configure a backup.
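Until that section is written, two points may help: on Ubuntu, MySQL normally keeps its data files under /var/lib/mysql, and mysqldump (installed with the server) can produce a plain-SQL backup of a database. A minimal sketch using the database1 example from above:

# dump database1 to a SQL file; you will be prompted for the MySQL root password
mysqldump -u root -p database1 > database1-backup.sql
# restore it later with: mysql -u root -p database1 < database1-backup.sql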

 

Alternatively

There is more than just one way to set the mysql root password and create a database. For example mysqladmin can be used:

 

$ mysqladmin -u root -p password yourpassword

and

 

$ mysqladmin -u root -p create database1

mysqladmin is a command-line tool provided by the default LAMP install.

 

Phpmyadmin and mysql-workbench

All mysql tasks including setting the root password and creating databases can be done via a graphical interface using phpmyadmin or mysql-workbench.

To install one or both of them, first enable the universe repository

Use any method to install

 

phpmyadmin

 

Troubleshooting Phpmyadmin & mysql-workbench

If you get a blowfish_secret error: choose and set a passphrase for cryptography in the file /etc/phpmyadmin/blowfish_secret.inc.php and copy the line (not the php tags) into the file /etc/phpmyadmin/config.inc.php, or you will keep receiving the error.

If you get a 404 error upon visiting http://localhost/phpmyadmin: You will need to configure apache2.conf to work with Phpmyadmin.

 

$ gksudo gedit /etc/apache2/apache2.conf

Include the following line at the bottom of the file, save and quit.

 

Include /etc/phpmyadmin/apache.conf

 

Alternative: install phpMyAdmin from source

See the phpMyAdmin page for instructions on how to install phpmyadmin from source:

 

Mysql-workbench

Mysql-workbench runs locally, on the desktop. Use any method to install

 

mysql-workbench

 

For more information

2.9.3. Securing the Initial MySQL Accounts from the MySQL Reference Manual is worth reading.

 

Edit Apache Configuration

You may want your current user to be the PHP pages administrator. To do so, edit the Apache configuration file :

$ gksudo "gedit /etc/apache2/envvars"

Search for the strings beginning with "APACHE_RUN_USER" and "APACHE_RUN_GROUP", and change them to the username and group name you are currently using. Then you'll need to restart Apache (see the later section on Apache commands).
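A minimal sketch of that edit from the command line (replace youruser and yourgroup with the account you actually use, and back up envvars first):

sudo cp /etc/apache2/envvars /etc/apache2/envvars.orig
sudo sed -i 's/^export APACHE_RUN_USER=.*/export APACHE_RUN_USER=youruser/' /etc/apache2/envvars
sudo sed -i 's/^export APACHE_RUN_GROUP=.*/export APACHE_RUN_GROUP=yourgroup/' /etc/apache2/envvars
sudo /usr/sbin/apache2ctl restart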

Configuration options relating specifically to user websites (accessed through localhost/~username) are in /etc/apache2/mods-enabled/userdir.conf.

 

Installing suPHP

suPHP is a tool for executing PHP scripts with the permissions of their owners. It consists of an Apache module (mod_suphp) and a setuid root binary (suphp) that is called by the Apache module to change the uid of the process executing the PHP interpreter.

Note: suPHP enforces security and helps avoid file permission problems in development environments with several users editing the site files, but it also demands more memory and CPU, which can degrade your server's performance under certain circumstances.

To install only suPHP, use any method to install the package

 

libapache2-mod-suphp

Enable this module by doing

 

sudo a2enmod suphp

then use a text editor such as “sudo nano” at the command line or “gksudo gedit” on the desktop to edit this file

 

sudo nano /etc/apache2/mods-available/php5.conf

or

 

gksu "gedit /etc/apache2/mods-available/php5.conf"

make a new empty line at the top of the content, then add

 

<Directory /usr/share>

make a new empty line at the bottom of the content, then add

 

</Directory>

save changes

For security reasons, we need to tell suPHP which document paths are allowed to execute scripts. Use a text editor such as "sudo nano" at the command line or "gksudo gedit" on the desktop to edit this file

 

sudo nano /etc/suphp/suphp.conf

or

 

gksu "gedit /etc/suphp/suphp.conf"

find the value “docroot” and specify the document path of your site files, for example:

 

docroot=/var/www/

That value restricts script execution to files inside "/var/www/".

 

docroot=/var/www/:${HOME}/public_html

That value restricts script execution to files inside "/var/www/" and inside each configured user's own "public_html" folder.

For this tutorial we are going to use this value:

 

docroot=/home/user/public_html/

which matches the Apache Directory directive set earlier in this document.

save changes

to restart Apache, type in your terminal

 

sudo /etc/init.d/apache2 restart

Now let's create a test script to see if suPHP is working correctly. In your terminal, type:

 

echo "<?php echo 'whoami = '.exec('/usr/bin/whoami');?>" | tee /home/user/public_html/whoami.php

That command creates a quick PHP test file that displays the user executing the script.

Open your browser and navigate to "localhost/whoami.php". Most likely the browser will show you a "500" server error; this is because suPHP does not allow overly permissive file and folder permissions, and also does not allow mixed file and folder ownership. To correct this, type in your terminal:

 

sudo find /home/user/public_html/ -type f -exec chmod 644 {} \;
sudo find /home/user/public_html/ -type d -exec chmod 755 {} \;
sudo chown user:group -R /home/user/public_html/

Those commands enforce secure and correct file and folder permissions and also set the correct user and group ownership for all of them.

Now open your browser and navigate to "localhost/whoami.php". If everything went fine, you should see the name of the file owner executing the script, and not "www-data" (unless you specified that user).

 

Run, Stop, Test, And Restart Apache

Use the following command to run Apache :

$ sudo /usr/sbin/apache2ctl start

To stop it, use :

$ sudo /usr/sbin/apache2ctl stop

To test configuration changes, use :

$ sudo /usr/sbin/apache2ctl configtest

Finally, to restart it, run :

$ sudo /usr/sbin/apache2ctl restart

Alternatively, you can use a graphical interface by installing Rapache or the simpler localhost-indicator.

Using Apache

You can access Apache by typing 127.0.0.1 or http://localhost (by default it will be listening on port 80) in your browser address bar. By default the directory for Apache server pages is /var/www. It needs root access in order to put files in it; one way to do this is to start the file browser as root in a terminal:

$ gksudo nautilus

or

if you want to make /var/www your own. (Use only for non-production web servers – this is not the most secure way to do things.)

$ sudo chown -R $USER:$USER /var/www

 

Status

To check the status of your PHP installation:

 $ gksudo "gedit /var/www/testphp.php"

and insert the following line

 <?php phpinfo(); ?>

View this page on a web browser at http://yourserveripaddress/testphp.php or http://localhost/testphp.php

 

Securing Apache

If you just want to run your Apache install as a development server and want to prevent it from listening for incoming connection attempts, this is easy to do.

$ gksudo "gedit /etc/apache2/ports.conf"
$ password:

Change ports.conf so that it contains:

Listen 127.0.0.1:80

Save this file, and restart Apache (see above). Now Apache will serve only to your home domain, http://127.0.0.1 or http://localhost.

 

Password-Protect a Directory

There are two ways to password-protect a specific directory. The recommended way involves editing /etc/apache2/apache2.conf (to do this, you need root access). The other way involves editing a .htaccess file in the directory to be protected (to do this, you need access to that directory).

 

Password-Protect a Directory With .htaccess

See EnablingUseOfApacheHtaccessFiles

Warning: On at least some versions of Ubuntu, .htaccess files will not work by default. See EnablingUseOfApacheHtaccessFiles for help on enabling them.

 

thumbnails

If you direct your web browser to a directory (rather than a specific file), and there is no “index.html” file in that directory, Apache will generate an index file on-the-fly listing all the files and folders in that directory. Each folder has a little icon of a folder next to it.

To put a thumbnail of that specific image (rather than the generic “image icon”) next to each image file (.jpg, .png, etc.):

… todo: add instructions on how to do thumbnails here, perhaps using Apache::AutoIndex 0.08 or Apache::Album 0.95

 

Known problems

 

Skype incompatibility

Skype uses port 80 for incoming calls and thus may block Apache. The solution is to change the port in one of the applications; usually port 81 is free and works fine. To change the port number in Skype, go to the menu Tools > Options, click on the Advanced tab, and enter your preferred port in the box for incoming calls.
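If you would rather move Apache than Skype, a rough sketch of switching Apache to port 81 (this assumes the stock "Listen 80" line in ports.conf; any VirtualHost *:80 entries would need the same change):

sudo sed -i 's/^Listen 80$/Listen 81/' /etc/apache2/ports.conf
sudo /etc/init.d/apache2 restart
# the site is then reachable at http://localhost:81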

 


Software, a sector with great potential

Monetizing projects and developing end-to-end solutions: the challenges

Mobile applications, business solutions, video games and programs that respond to the needs of every profession make up the software universe, an industry that in 2012 reached a value of 2.3 billion dollars in Mexico, 11% more than in 2011, and that in 2013 is expected to become a market opportunity for young entrepreneurs, with growth of 10%, according to César Longa, manager of the Software Program at IDC Latinoamérica.

The analyst explained that the main growth last year came in the development of applications for the supply chain and customer relationship areas, which registered 27% and 11%, respectively. These applications in turn drove the creation of software versions for mobile devices.

In this context, Iván Zavala, Information Technology coordinator at the United States-Mexico Foundation for Science (Fumec), comments that in recent years Mexico has positioned itself as one of the leading countries in Information Technology (IT), and specifically in software: "The goal is to become a nation that exports technological solutions, reaching sales of 10 billion dollars a year."

And although this goal has not yet been reached, Zavala says that Mexican software development is on the right track and on its way to that figure; the role of entrepreneurs will be strategic, so the government and the institutions in charge of developing the sector face the challenge of supporting both existing companies and entrepreneurs interested in launching new businesses that can compete in the software market.

"It is not only about volume, it is about having quality companies. The question is: how do we make sure that the companies being created have an internationally competitive offering? That is only possible with innovation and solid business propositions," said Iván Zavala.

They lack commercial education

While generating innovative solutions is the main task of entrepreneurs in this field, they also need to strengthen their commercial skills: they must develop an accurate view of the market through analysis, define the niche they want to work in, train themselves to be the salespeople for their own solution, and learn to monetize their developments.

According to Ricardo Medina, outreach manager at Microsoft in Mexico, another important commercial challenge is for entrepreneurs to think about scaling their solutions for mass sales: at the start they have small but stable sales and the business moves along smoothly, but when demand starts to grow, the company has neither the resources nor the vision to increase production safely.

The expert also pointed to the need to create virtual sales chains, in which the entrepreneur approaches local governments, universities or representatives of other industries to learn about their needs and create tailored solutions, so that before the solution is even built they already have a buyer.

Mobility, the spearhead

One of the most important trends in the software industry is mobility, an area where Iván Zavala of Fumec clarified: "It is not only about mobile apps; it is about solutions for working outside the office, remote collaboration systems, business administration, information monitoring and other productive activities.

In this field, entrepreneurs should bet on developing integrated, collaborative solutions." Microsoft's expert, Ricardo Medina, agrees, and proposes to entrepreneurs the App-in-Brick model, which consists of developing applications always tied to physical businesses, as a complement to companies in other industries such as restaurants, tourism, and so on.

"The big challenge in mobility is monetizing applications, because the sale price, for example of mobile apps, is very low, and as a company's only source of income it is not enough. It opens the door to a repeat of the burst of the Internet bubble, when companies went bankrupt because they did not know how to make their offering profitable," explains Ricardo Medina.

marisela.delgado@eleconomista.mx

CREDIT:

Marisela Delgado

Monetizing projects and developing integrated solutions: the challenges

Mobile applications, enterprise solutions, video games and programs that meet the needs of every profession make up the software universe, an industry that in 2012 reached a value of 2,300 million dollars in Mexico, 11% more than in 2011, and that in 2013 is expected to become a market opportunity for young entrepreneurs, with projected growth of 10%, according to César Longa, manager of the Software Program at IDC Latin America.

The analyst explained that last year's strongest growth came in application development for the supply chain and customer relationship areas, which grew 27% and 11%, respectively. These applications in turn drove the creation of software versions for mobile devices.

In this context, Iván Zavala, Information Technology coordinator at the Mexico-United States Foundation for Science (Fumec), notes that in recent years Mexico has positioned itself as one of the leading countries in Information Technology (IT) and, specifically, in software: “The goal is to become a nation that exports technology solutions, reaching annual sales of 10,000 million dollars.”

Although that goal has not yet been reached, Zavala says Mexican software development is on the right track and on its way to that figure; the role of entrepreneurs will be strategic, so the government and the institutions in charge of developing the sector face the challenge of supporting both existing companies and entrepreneurs interested in launching new businesses that can compete in the software market.

“It is not just about volume, it is about having quality companies. The question is: how do we make sure the companies being created have an internationally competitive offering? That is only possible with innovation and solid business propositions,” said Iván Zavala.

They lack commercial training

While creating innovative solutions is the main task of entrepreneurs in this field, they also need to strengthen their commercial skills: they must develop an accurate view of the market through analysis, define the niche in which they want to operate, train themselves to sell their own solution, and learn to monetize their developments.

According to Ricardo Medina, partnerships manager at Microsoft in Mexico, another major commercial challenge is getting entrepreneurs to think about scaling their solutions for mass sales: at the start they have small but steady sales and the business moves along smoothly, but when demand begins to grow, the company has neither the resources nor the vision to increase production safely.

The expert also pointed to the need to create virtual marketing chains, in which the entrepreneur approaches local governments, universities or representatives of other industries to learn their needs and build tailored solutions, so that the solution already has a buyer before it is even developed.

Mobility, the spearhead

One of the most important trends in the software industry is mobility, an area where Iván Zavala of Fumec offered a clarification: “It is not only about mobile apps; it is about solutions for working outside the office, remote collaboration systems, business administration, information monitoring and other productive activities.

In this field, entrepreneurs should bet on developing integrated, collaborative solutions.” Microsoft's expert, Ricardo Medina, agrees, and proposes the “App in Brick” model to entrepreneurs, which consists of developing applications that are always tied to physical businesses, as a complement to companies in other industries such as restaurants, tourism and so on.

“The big challenge in mobility is monetizing applications, because the selling price of a mobile app, for example, is very low, and as a company's only source of income it is not enough. That opens the door to a repeat of the dot-com bubble burst, when companies went under because they did not know how to make their offering profitable,” explains Ricardo Medina.

marisela.delgado@eleconomista.mx

CREDIT:

Marisela Delgado

Less duplication in development banking

The government should use its leverage to increase lending to MSMEs

On Tuesday, the federal government announced a new lending target for development banking in 2013: one trillion pesos.

During the event “La Banca de Desarrollo: Avances y Perspectivas” (Development Banking: Progress and Outlook), President Enrique Peña Nieto stated that, even though the financial reform is still under review, this year's objectives must be met in full, namely: not competing with commercial banks, and broadening coverage to promote financial inclusion, putting financing within reach of the large share of the population that cannot obtain it through traditional channels.

This segment includes micro, small and medium-sized enterprises (MSMEs), small-scale exporters, agro-industrial businesses and agricultural producers, among others.

However, despite the good intentions in the speech, it is worth remembering that development banking faces many more challenges than the ones set by the federal government.

One of these challenges is taking advantage of development banking's leverage capacity, above all to support medium-sized companies, which have the potential to generate innovative projects and much better jobs.

Likewise, for years there has been duplication of functions and of the types of credit or guarantees offered across the various government institutions of this kind, in addition to the high operating costs seen administration after administration.

Today, a set of technology solutions and better training would do away with the many separate service windows, allow efficient operation and considerably reduce operating expenses.

Along the same lines, it would also be worth reviewing product by product in order to build a far more coherent offering that is easier to publicize and to grant. Keeping the aid and programs so scattered makes the support more expensive and harder to obtain.

It will also be necessary to deregulate and ease access to these resources for the business owners who truly have no other option because they do not qualify for credit with commercial banks, especially since in the past we have seen major federal government interventions when large corporations or retail chains are involved.

Better regions

Regional growth will also depend heavily on how well development banking performs over the coming years, especially now that the Ministry of the Economy (SE), through the National Entrepreneur's Institute (Inadem), is seeking to unlock the strengths of each state and region more forcefully.

This will only be achieved with an integrated approach and industry-specific products, developed jointly with the states and municipalities.

Each development bank will then have to find its own vocation and the sectors it should target, strengthening its capital, using its budget resources efficiently and generating economies of scale.

This will certainly be achieved if the government also decides to put corporate governance in place within its development banks, increasing the effectiveness and efficiency of these financial institutions and improving accountability and transparency.

carmen.castellanos@eleconomista.mx

Twitter: @chucastellanos

CREDIT:

Carmen Castellanos


Quick Response Code

QR code (abbreviated from Quick Response Code) is the trademark for a type of matrix barcode (or two-dimensional barcode) first designed for the automotive industry in Japan; a barcode is an optically machine-readable label that is attached to an item and that records information related to that item: The information encoded by a QR code […]

QR code (abbreviated from Quick Response Code) is the trademark for a type of matrix barcode (or two-dimensional barcode) first designed for the automotive industry in Japan; a barcode is an optically machine-readable label that is attached to an item and that records information related to that item: The information encoded by a QR code may be made up of four standardized types (“modes”) of data (numeric, alphanumeric, byte / binary, Kanji) or, through supported extensions, virtually any type of data.[1]

The QR Code system has become popular outside the automotive industry due to its fast readability and greater storage capacity compared to standard UPC barcodes. Applications include product tracking, item identification, time tracking, document management, general marketing, and much more.[2]

A QR code consists of black modules (square dots) arranged in a square grid on a white background, which can be read by an imaging device (such as a camera) and processed using Reed-Solomon error correction until the image can be appropriately interpreted; data is then extracted from patterns present in both horizontal and vertical components of the image.[2]


PHP QR Code is an open-source (LGPL) library for generating QR Code, a two-dimensional barcode. It is based on the libqrencode C library and provides an API for creating QR Code barcode images (PNG, and JPEG thanks to GD2). It is implemented purely in PHP, with no external dependencies (except GD2, if needed); a brief command-line usage sketch follows the feature list below.

Some of the library's features include:

  • Supports QR Code versions (size) 1-40
  • Numeric, alphanumeric, 8-bit and Kanji encoding (Kanji encoding has not been fully tested; if you work with Japanese encodings, you can contribute by verifying it)
  • Implemented purely in PHP, no external dependencies except GD2
  • Exports to PNG and JPEG images, and also exports as a bit table
  • TCPDF 2-D barcode API integration
  • Easy to configure
  • Data cache for calculation speed-up
  • A provided merge tool helps deploy the library as one big dependency-free file, simple to “include and not worry about”
  • Debug data dump, error logging, time benchmarking
  • API documentation
  • Detailed examples
  • 100% Open Source, LGPL Licensed
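
As a rough illustration of the API, the one-liner below generates a PNG from a shell prompt. This is only a sketch, not taken verbatim from the library's documentation: it assumes the merged single-file build is saved as phpqrcode.php in the current directory, and that QRcode::png() takes the text, an output file name, an error-correction level constant (QR_ECLEVEL_L/M/Q/H), a module size and a margin, in that order.

php -r "require 'phpqrcode.php'; QRcode::png('http://www.eclipse.org/', 'eclipse-qr.png', QR_ECLEVEL_M, 4, 2);"

When called from a web page, passing false instead of a file name should make the library emit the image directly to the browser; check the bundled examples to confirm that behavior.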

The Eric Python IDE

Eric is a full featured Python and Ruby editor and IDE, written in python. It is based on the cross platform Qt gui toolkit, integrating the highly flexible Scintilla editor control. It is designed to be usable as everdays’ quick and dirty editor as well as being usable as a professional project management tool integrating […]

Eric is a full-featured Python and Ruby editor and IDE, written in Python. It is based on the cross-platform Qt GUI toolkit and integrates the highly flexible Scintilla editor control. It is designed to be usable as an everyday quick-and-dirty editor as well as a professional project-management tool that integrates the many advanced features Python offers the professional coder. eric4 includes a plugin system, which allows easy extension of the IDE's functionality with plugins downloadable from the net.

Current stable versions are eric4, based on Python 2 and Qt4, and eric5, based on Python 3 and Qt4.


http://ubuntuforums.org/showthread.php?t=1601218

sudo apt-get install libqt4-dev
sudo apt-get install python3.2-dev

Use Synaptic, or download the remaining packages from http://www.riverbankcomputing.co.uk and build/install them in this order (a sketch of the build commands follows the version list below):

1) QScintilla
2) sip
3) PyQt

Versions used:

Python 3.2.3
Qt 4.8.1
PyQt 4.9.1
QScintilla 2.6.1
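
The three build/install steps above follow the usual Riverbank source-build pattern (configure.py, then make, then make install). The commands below are only a hedged sketch: the archive and directory names are guesses matching the versions listed above and vary from release to release, each group assumes you start in the directory where the tarballs were unpacked, and each package's README is the authority on the exact steps.

# 1) QScintilla (the C++ library; its Python bindings in the Python/ subdirectory
#    are built the same configure.py way, but only after sip and PyQt are installed)
( cd QScintilla-gpl-2.6.1/Qt4 && qmake qscintilla.pro && make && sudo make install )

# 2) sip
( cd sip-4.13.2 && python3 configure.py && make && sudo make install )

# 3) PyQt
( cd PyQt-x11-gpl-4.9.1 && python3 configure.py && make && sudo make install )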


Python from Scratch

Virtualization

Virtualization, in computing, is a term that refers to the various techniques, methods or approaches of creating a virtual (rather than actual) version of something, such as a virtual hardware platform, operating system (OS), storage device, or network resources. Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like […]

Virtualization, in computing, is a term that refers to the various techniques, methods or approaches of creating a virtual (rather than actual) version of something, such as a virtual hardware platform, operating system (OS), storage device, or network resources.

Hardware virtualization or platform virtualization refers to the creation of a virtual machine that acts like a real computer with an operating system. Software executed on these virtual machines is separated from the underlying hardware resources. For example, a computer that is running Microsoft Windows may host a virtual machine that looks like a computer with the Ubuntu Linux operating system; Ubuntu-based software can be run on the virtual machine.[1][2]

In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. The words host and guest are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine. The software or firmware that creates a virtual machine on the host hardware is called a hypervisor or Virtual Machine Manager.

Different types of hardware virtualization include:

  1. Full virtualization: Almost complete simulation of the actual hardware to allow software, which typically consists of a guest operating system, to run unmodified.
  2. Partial virtualization: Some but not all of the target environment is simulated. Some guest programs, therefore, may need modifications to run in this virtual environment.
  3. Paravirtualization: A hardware environment is not simulated; however, the guest programs are executed in their own isolated domains, as if they are running on a separate system. Guest programs need to be specifically modified to run in this environment.

Hardware-assisted virtualization is a way of improving the efficiency of hardware virtualization. It involves employing specially designed CPUs and hardware components that help improve the performance of a guest environment.

Hardware virtualization can be viewed as part of an overall trend in enterprise IT that includes autonomic computing, a scenario in which the IT environment will be able to manage itself based on perceived activity, and utility computing, in which computer processing power is seen as a utility that clients can pay for only as needed. The usual goal of virtualization is to centralize administrative tasks while improving scalability and overall hardware-resource utilization.

With virtualization, several operating systems can be run in parallel on a single central processing unit (CPU). This parallelism tends to reduce overhead costs and differs from multitasking, which involves running several programs on the same OS. Using virtualization, an enterprise can better manage updates and rapid changes to the operating system and applications without disrupting the user. “Ultimately, virtualization dramatically improves the efficiency and availability of resources and applications in an organization. Instead of relying on the old model of ‘one server, one application’ that leads to underutilized resources, virtual resources are dynamically applied to meet business needs without any excess fat” (ConsonusTech).

Hardware virtualization is not the same as hardware emulation. In hardware emulation, a piece of hardware imitates another, while in hardware virtualization, a hypervisor (a piece of software) imitates a particular piece of computer hardware or the entire computer. Furthermore, a hypervisor is not the same as an emulator; both are computer programs that imitate hardware, but their domain of use in language differs.

VirtualBox is a general-purpose full virtualizer for x86 hardware, targeted at server, desktop and embedded use.

For a thorough introduction to virtualization and VirtualBox, please refer to the online version of the VirtualBox User Manual’s first chapter.
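
If you prefer the command line to the GUI, VirtualBox also ships a VBoxManage tool. The commands below are a rough sketch of registering and starting an Ubuntu guest on an Ubuntu host; the VM name, memory size and OS type string are arbitrary examples, the storage and installation-media setup is omitted, and on older Ubuntu releases the package may be called virtualbox-ose.

sudo apt-get install virtualbox

# create and register a new machine, then give it some memory and a NAT network interface
VBoxManage createvm --name "UbuntuGuest" --ostype Ubuntu_64 --register
VBoxManage modifyvm "UbuntuGuest" --memory 1024 --nic1 nat

# confirm it is registered, then boot it without opening a window
VBoxManage list vms
VBoxManage startvm "UbuntuGuest" --type headless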



Why does HP recommend that I keep Hardware Virtualization off?

There are several attack vectors from bad drivers that can utilize VT extensions to do potentially bad things. That's why the setting is usually in the “security” section of your BIOS UI.

Additionally, the smaller your instruction set, the more efficiently the CPU runs at a very low level (hence last decade's interest in RISC chips). Having it disabled allows the CPU to cache fewer instructions and search the cache faster.

http://en.wikipedia.org/wiki/Blue_Pill_%28software%29

So is there a security risk to enabling AMD-V? – Rocket Hazmat Feb 1 at 16:21
Yes. Installing drivers and other very low-level software is always risky, so it's probably no more risky than grabbing a driver off a non-official download site. The big difference is that a blue-pill exploit could allow a guest to affect the host and vice versa, which should really never be possible. – Frank Thomas Feb 1 at 16:37
I disagree that enabling AMD-V is a security risk. A quick search on “AMD-V security” returns NO first-page results about a security vulnerability, which says a great deal. – Ramhound Feb 1 at 16:46
So, it's off by default because there are rootkits that pretend to be hypervisors? Guess I just gotta be careful what I download! :-) – Rocket Hazmat Feb 1 at 16:49
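
To see where your own machine stands in this discussion, two quick checks help: the CPU flags in /proc/cpuinfo show whether the processor supports VT-x (the vmx flag) or AMD-V (the svm flag) at all, and the kvm-ok tool from Ubuntu's cpu-checker package reports whether the extension is actually usable, i.e. not switched off in the BIOS. Package and tool names are the ones found on Ubuntu; other distributions may differ.

# a non-zero count means the CPU advertises hardware virtualization support
egrep -c '(vmx|svm)' /proc/cpuinfo

# reports whether VT-x/AMD-V is enabled and usable
sudo apt-get install cpu-checker
sudo kvm-ok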

Blue Pill is the codename for a rootkit based on x86 virtualization. Blue Pill originally required AMD-V (Pacifica) virtualization support, but was later ported to support Intel VT-x (Vanderpool) as well. It was designed by Joanna Rutkowska and originally demonstrated at the Black Hat Briefings on August 3, 2006, with a reference implementation for the Microsoft Windows Vista kernel.