perl

Perl is a family of high-level, general-purpose, interpreted, dynamic programming languages. The languages in this family include Perl 5 and Perl 6.[4]

Though Perl is not officially an acronym,[5] there are various backronyms in use, such as: Practical Extraction and Reporting Language.[6] Perl was originally developed by Larry Wall in 1987 as a general-purpose Unix scripting language to make report processing easier.[7] Since then, it has undergone many changes and revisions. The latest major stable revision of Perl 5 is 5.18, released in May 2013. Perl 6, which began as a redesign of Perl 5 in 2000, eventually evolved into a separate language. Both languages continue to be developed independently by different development teams and liberally borrow ideas from one another.

The Perl languages borrow features from other programming languages including C, shell scripting (sh), AWK, and sed.[8] They provide powerful text processing facilities without the arbitrary data-length limits of many contemporary Unix tools,[9] facilitating easy manipulation of text files. Perl 5 gained widespread popularity in the late 1990s as a CGI scripting language, in part due to its parsing abilities.[10]

In addition to CGI, Perl 5 is used for graphics programming, system administration, network programming, finance, bioinformatics, and other applications. It’s nicknamed “the Swiss Army chainsaw of scripting languages” because of its flexibility and power,[11] and possibly also because of its perceived “ugliness”.[12] In 1998, it was also referred to as the “duct tape that holds the Internet together”, in reference to its ubiquity and perceived inelegance.[13]

Perl was originally named “Pearl”. Wall wanted to give the language a short name with positive connotations; he claims that he considered (and rejected) every three- and four-letter word in the dictionary. He also considered naming it after his wife Gloria. Wall discovered the existing PEARL programming language before Perl’s official release and changed the spelling of the name.[36]

When referring to the language, the name is normally capitalized (Perl) as a proper noun. When referring to the interpreter program itself, the name is often uncapitalized (perl) because most Unix-like file systems are case-sensitive. Before the release of the first edition of Programming Perl, it was common to refer to the language as perl; Randal L. Schwartz, however, capitalized the language’s name in the book to make it stand out better when typeset. This case distinction was subsequently documented as canonical.[37]

There is some contention about the all-caps spelling “PERL”, which the documentation declares incorrect[37] and which some core community members consider a sign of outsiders.[38] The name is occasionally expanded as Practical Extraction and Report Language, but this is a backronym.[39] Other expansions have been suggested as equally canonical, including Wall’s own humorous Pathologically Eclectic Rubbish Lister.[40] Indeed, Wall claims that the name was intended to inspire many different expansions.[41]

The Comprehensive Perl Archive Network (CPAN) currently has 121,260 Perl modules in 27,769 distributions, written by 10,733 authors, mirrored on 270 servers.

The archive has been online since October 1995 and is constantly growing.

CPAN, the Comprehensive Perl Archive Network, is an archive of over 114,000 modules of software written in the Perl programming language, as well as documentation for them.[1] It has a presence on the World Wide Web at www.cpan.org and is mirrored worldwide at more than 200 locations.[2] CPAN can denote either the archive network itself, or the Perl program that acts as an interface to the network and as an automated software installer (somewhat like a package manager). Most software on CPAN is free and open source software.[3] CPAN was conceived in 1993, and the first web-accessible mirror was launched in January 1997.[4]

Like many programming languages, Perl has mechanisms to use external libraries of code, so that one file of common routines can be shared by several programs. Perl calls these modules. Perl modules are typically installed in one of several directories whose paths are built into the Perl interpreter when it is first compiled; on Unix-like operating systems, common paths include /usr/lib/perl5, /usr/local/lib/perl5, and several of their subdirectories.
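
As a quick illustration, Perl exposes that search path to programs as the @INC array, and loading a module with use maps the module name onto a file found in one of those directories. A minimal sketch (the paths printed will vary by installation):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # 'use File::Spec' maps to File/Spec.pm, searched for in @INC.
    use File::Spec;

    # Print every directory this interpreter will search for modules.
    print "$_\n" for @INC;

    # File::Spec joins path pieces portably.
    print File::Spec->catdir('usr', 'local', 'lib', 'perl5'), "\n";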

Perl comes with a small set of core modules. Some of these perform bootstrapping tasks, such as ExtUtils::MakeMaker, which is used for building and installing other extension modules; others, like CGI.pm, are merely commonly used. The authors of Perl do not expect this limited group to meet every need, however.

The CPAN’s main purpose is to help programmers locate modules and programs not included in the Perl standard distribution. Its structure is decentralized: authors maintain and improve their own modules, and forking or creating competing modules for the same task or purpose is common. There is no formal bug tracking system built into CPAN, but there is a third-party bug tracking system that CPAN has designated as the suggested official method of reporting issues with modules. Continuous development on modules is rare; many are abandoned by their authors, or go years between releases. Sometimes a maintainer is appointed to an abandoned module; they can release new versions of the module and accept patches from the community as their time permits. CPAN has no revision control system, although the source for the modules is often stored on GitHub. Also, the complete history of the CPAN and all its modules is available as the GitPAN project, making it easy to see the complete history of every module and to maintain forks. CPAN is also used to distribute new versions of Perl, as well as related projects, such as Parrot.

The CPAN is an important resource for the professional Perl programmer. With over 23,000 modules (containing 20,000,000 lines of code) as of July 2011, the CPAN can save programmers weeks of time, and large Perl programs often make use of dozens of modules. Some of them, such as the DBI family of modules used for interfacing with SQL databases, are nearly irreplaceable in their area of function; others, such as the List::Util module, are simply handy resources containing a few common functions.
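
As a small taste of that convenience, a sketch using a few List::Util functions (the data is invented for illustration):

    use strict;
    use warnings;
    use List::Util qw(sum max first);

    my @sizes = (12, 7, 42, 3);
    print 'total: ',   sum(@sizes), "\n";    # 64
    print 'largest: ', max(@sizes), "\n";    # 42

    my $big = first { $_ > 10 } @sizes;      # first element over 10
    print "first over 10: $big\n";           # 12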

Files on the CPAN are referred to as distributions. A distribution may consist of one or more modules, documentation files, or programs packaged in a common archiving format, such as a gzipped tar archive or a ZIP file. Distributions will often contain installation scripts (usually called Makefile.PL or Build.PL) and test scripts which can be run to verify the contents of the distribution are functioning properly. New distributions are uploaded to the Perl Authors Upload Server, or PAUSE (see the section Uploading distributions with PAUSE).
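
A minimal Makefile.PL, sketched here with ExtUtils::MakeMaker and a hypothetical distribution called Acme-Example, might look like this:

    # Makefile.PL for the hypothetical Acme-Example distribution
    use ExtUtils::MakeMaker;

    WriteMakefile(
        NAME         => 'Acme::Example',
        VERSION_FROM => 'lib/Acme/Example.pm',  # reads $VERSION from the module
        PREREQ_PM    => { 'Test::More' => 0 },  # declared prerequisites
    );

Installing from such a distribution conventionally runs perl Makefile.PL, make, make test, and make install.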

In 2003, distributions started to include metadata files, called META.yml, indicating the distribution’s name, version, dependencies, and other useful information; however, not all distributions contain metadata. When metadata is not present in a distribution, PAUSE’s software will usually try to analyze the code in the distribution to find the same information; this is not necessarily very reliable.
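
A hedged sketch of what such a META.yml might contain, using the same hypothetical Acme-Example distribution (the field names follow the CPAN META specification; the values are invented):

    # META.yml for the hypothetical Acme-Example distribution
    name:     Acme-Example
    version:  0.01
    abstract: A module invented here purely for illustration
    author:
      - A. U. Thor <author@example.org>
    requires:
      Test::More: 0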

With thousands of distributions, CPAN needs to be structured to be useful. Distributions on the CPAN are divided into 24 broad chapters based on their purpose, such as Internationalization and Locale; Archiving, Compression, And Conversion; and Mail and Usenet News. Distributions can also be browsed by author. Finally, the natural hierarchy of Perl module names (such as “Apache::DBI” or “Lingua::EN::Inflect”) can sometimes be used to browse modules in the CPAN.

CPAN module distributions usually have names in the form of CGI-Application-3.1 (where the :: used in the module’s name has been replaced with a dash, and the version number has been appended to the name), but this is only a convention; many prominent distributions break the convention, especially those that contain multiple modules. Security restrictions prevent a distribution from ever being replaced, so virtually all distribution names do include a version number.

There is also a Perl core module named CPAN; it is usually differentiated from the repository itself by using the name CPAN.pm. CPAN.pm is mainly an interactive shell that can be used to search for, download, and install distributions. A command-line program called cpan is also provided in the Perl core and is the usual way of running CPAN.pm. After a short configuration process and mirror selection, it uses tools available on the user’s computer to automatically download, unpack, compile, test, and install modules. It is also capable of updating itself.

More recently, an effort to replace CPAN.pm with something cleaner and more modern has resulted in the CPANPLUS (or CPAN++) set of modules. CPANPLUS separates the back-end work of downloading, compiling, and installing modules from the interactive shell used to issue commands. It also supports several advanced features, such as cryptographic signature checking and test result reporting. Finally, CPANPLUS can uninstall a distribution. CPANPLUS was added to the Perl core in version 5.10.0.
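
CPANPLUS ships with its own command-line client, cpanp; a typical session might look something like this (the exact prompt and commands can differ between versions):

    $ cpanp
    CPAN Terminal> i Modern::Perl      # install a distribution
    CPAN Terminal> u Acme::Example     # uninstall one (hypothetical module)
    CPAN Terminal> q                   # quit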

Both modules can check a distribution’s dependencies and can be set to recursively install any prerequisites, either automatically or with individual user approval. Both support FTP and HTTP and can work through firewalls and proxies.
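
With CPAN.pm, for example, that behavior is governed by a configuration option; entering something like the following in the cpan shell switches it from asking about prerequisites to following them automatically (option names as in recent CPAN.pm releases):

    cpan[1]> o conf prerequisites_policy follow
    cpan[2]> o conf commit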

Install all dependent packages for CPAN

sudo apt-get install build-essential

Invoke the cpan command as a normal user

cpan

Once you hit Enter and cpan executes, you will be asked a few configuration questions. To make it simple for yourself, answer “no” to the first question so that the remaining settings are configured automatically.

At the cpan prompt, enter the commands below to install the recommended CPAN bundle and then reload the client:

install Bundle::CPAN
reload cpan

Everything is now set up, and you can install any Perl module you want.

Type o conf init to reconfigure cpan.

The Best Perl Programmers Use Modern Perl

by chromatic

In 1987, Perl 1.0 changed the world. In the decades since then, the language has grown from a simple tool for system administration, somewhere between shell scripting and C programming, to a powerful, general-purpose language steeped in a rich heritage.

Even so, most Perl 5 programs in the world take far too little advantage of the language. You can write Perl 5 programs as if they were Perl 4 programs (or Perl 3 or 2 or 1), but programs written to take advantage of everything amazing the worldwide Perl 5 community has invented, polished, and discovered are shorter, faster, more powerful, and easier to maintain than their alternatives.

They solve difficult problems with speed and elegance. They take advantage of the CPAN and its unparalleled library of reusable code. They get things done.

This productivity can be yours, whether you’ve dabbled with Perl for a decade or someone just handed you this book and said “Fix this code by Friday.”

Modern Perl is suitable for programmers of every level. It’s more than a Perl tutorial—only Modern Perl focuses on Perl 5.12 and 5.14, to demonstrate the latest and most effective time-saving features. Only Modern Perl explains how and why the language works, to let you unlock the full power of Perl.

Hone your skills. Sharpen your knowledge of the tools and techniques that make Perl so effective. Master everything Perl has to offer.

When you have to solve a problem now, reach for Perl. When you have to solve a problem right, reach for Modern Perl.

Visit the companion website at Modern Perl Books or read Modern Perl: the Book online.

Modern Perl installations include two clients for connecting to, searching, downloading, building, testing, and installing CPAN distributions: CPAN.pm and CPANPLUS. For the most part, each of these clients is equivalent for basic installation. This book recommends the use of CPAN.pm solely due to its ubiquity. With a recent version (as of this writing, 1.9800 is the latest stable release), module installation is reasonably easy. Start the client with:

    $ cpan

To install a distribution within the client:

    $ cpan
    cpan[1]> install Modern::Perl

… or to install directly from the command line:

    $ cpan Modern::Perl

Eric Wilhelm’s tutorial on configuring CPAN.pm http://learnperl.scratchcomputing.com/tutorials/configuration/ includes a great troubleshooting section.

cURL

cURL is a computer software project providing a library and command-line tool for transferring data using various protocols. The cURL project produces two products, libcurl and cURL. It was first released in 1997.

curl is a command line tool for transferring data with URL syntax, supporting DICT, FILE, FTP, FTPS, Gopher, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMTP, SMTPS, Telnet and TFTP. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos…), file transfer resume, proxy tunneling and a busload of other useful tricks.

Working with HTTP from the command-line is a valuable skill for HTTP architects and API designers to have. The cURL library and curl command give you the ability to design a Request, put it on the pipe, and explore the Response. The downside to the power of curl is how much breadth its options cover. Running curl --help spits out 150 different flags and options. This article demonstrates nine basic, real-world applications of curl.

In this tutorial we’ll use the httpkit echo service as our end point. The echo server’s Response is a JSON representation of the HTTP request it receives.

Make a Request

Let’s start with the simplest curl command possible.

Request
curl http://echo.httpkit.com
Response
{
  "method": "GET",
  "uri": "/",
  "path": {
    "name": "/",
    "query": "",
    "params": {}
  },
  "headers": {
    "host": "echo.httpkit.com",
    "user-agent": "curl/7.24.0 ...",
    "accept": "*/*"
  },
  "body": null,
  "ip": "28.169.144.35",
  "powered-by": "http://httpkit.com",
  "docs": "http://httpkit.com/echo"
}

Just like that, we have used curl to make an HTTP Request. The method, or “verb”, curl uses by default is GET. The resource, or “noun”, we are requesting is addressed by the URL pointing to the httpkit echo service, http://echo.httpkit.com.

You can add path and query string parameters right to the URL.

Request
curl http://echo.httpkit.com/path?query=string
Response
{ ...
  "uri": "/path?query=string",
  "path": {
    "name": "/path",
    "query": "?query=string",
    "params": {
      "query": "string"
    }
  }, ...
}

Set the Request Method

The curl default HTTP method, GET, can be set to any method you would like using the -X option. The usual suspects (POST, PUT, DELETE), and even custom methods, can be specified.

Request
curl -X POST echo.httpkit.com
Response
{
    "method": "POST",
    ...
}

As you can see, the http:// protocol prefix can be dropped with curl because it is assumed by default. Let’s give DELETE a try, too.

Request
curl -X DELETE echo.httpkit.com
Response
{
    "method": "DELETE",
    ...
}

Set Request Headers

Request headers allow clients to provide servers with meta information about things such as authorization, capabilities, and body content-type. OAuth2 uses an Authorization header to pass access tokens, for example. Custom headers are set in curl using the -H option.

Request
curl -H "Authorization: OAuth 2c4419d1aabeec" 

http://echo.httpkit.com

Response
{...
"headers": {
    "host": "echo.httpkit.com",
    "authorization": "OAuth 2c4419d1aabeec",
  ...},
...}

Multiple headers can be set by using the -H option multiple times.

Request
curl -H "Accept: application/json" 
     -H "Authorization: OAuth 2c3455d1aeffc" 

http://echo.httpkit.com

Response
{ ...
  "headers": { ...
    "host": "echo.httpkit.com",
    "accept": "application/json",
    "authorization": "OAuth 2c3455d1aeffc"
   }, ...
}

Send a Request Body

Many popular HTTP APIs today POST and PUT resources using application/json or application/xml rather than HTML form data. Let’s try PUTting some JSON data to the server.

Request
curl -X PUT \
     -H 'Content-Type: application/json' \
     -d '{"firstName":"Kris", "lastName":"Jordan"}' \
     echo.httpkit.com
Response
{
   "method": "PUT", ...
   "headers": { ...
     "content-type": "application/json",
     "content-length": "40"
   },
   "body": "{"firstName":"Kris","lastName":"Jordan"}",
   ...
 }

Use a File as a Request Body

Escaping JSON/XML at the command line can be a pain, and sometimes the body payloads are large files. Luckily, cURL’s @readfile macro makes it easy to read in the contents of a file. If we had the above example’s JSON in a file named “example.json”, we could have run it like this instead:

Request
curl -X PUT \
     -H 'Content-Type: application/json' \
     -d @example.json \
     echo.httpkit.com

POST HTML Form Data

Being able to set a custom method, like POST, is of little use if we can’t also send a request body with data. Perhaps we are testing the submission of an HTML form. Using the -d option we can specify URL encoded field names and values.

Request
curl -d "firstName=Kris" 
     -d "lastName=Jordan" 
     echo.httpkit.com
Response
{
  "method": "POST", ...
  "headers": {
    "content-length": "30",
    "content-type":"application/x-www-form-urlencoded"
  },
  "body": "firstName=Kris&lastName=Jordan", ...
}

Notice the method is POST even though we did not specify it. When curl sees form field data it assumes POST. You can override the method using the -X flag discussed above. The “Content-Type” header is also automatically set to “application/x-www-form-urlencoded” so that the web server knows how to parse the content. Finally, the request body is composed by URL encoding each of the form fields.

POST HTML Multipart / File Forms

What about HTML forms with file uploads? As you know from writing HTML file upload forms, these use a multipart/form-data Content-Type, set with the enctype attribute in HTML. In cURL we can pair the -F option with the @readfile macro covered above.

Request
curl -F "firstName=Kris" 
     -F "publicKey=@idrsa.pub;type=text/plain" 
     echo.httpkit.com
Response
{
  "method": "POST",
  ...
  "headers": {
    "content-length": "697",
    "content-type": "multipart/form-data;
    boundary=----------------------------488327019409",
    ... },
  "body": "------------------------------488327019409rn
           Content-Disposition: form-data;
           name="firstName"rnrn
           Krisrn
           ------------------------------488327019409rn
           Content-Disposition: form-data;
           name="publicKey";
           filename="id_rsa.pub"rn
           Content-Type: text/plainrnrn
           ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAkq1lZYUOJH2
           ... more [a-zA-Z0-9]* ...
           naZXJw== krisjordan@gmail.comnrn
           ------------------------------488327019409
           --rn",
...}

As with the -d flag, when using -F curl automatically defaults to the POST method, sets the multipart/form-data content-type header, calculates the content length, and composes the multipart body for you. Notice how the @readfile macro reads the contents of a file into any string; it’s not just a standalone operator. The “;type=text/plain” suffix specifies the MIME content-type of the file. Left unspecified, curl will attempt to sniff the content-type for you.

Test Virtual Hosts, Avoid DNS

Testing a virtual host or a caching proxy before modifying DNS, and without overriding hosts, is useful on occasion. With cURL, just point the request at your host’s IP address and override the default Host header cURL sets.

Request
curl -H "Host: google.com" 50.112.251.120
Response
{
  "method": "GET", ...
  "headers": {
    "host": "google.com", ...
  }, ...
}

View Response Headers

APIs are increasingly making use of response headers to provide information on authorization, rate limiting, caching, etc. With cURL you can view the headers and the body using the -i flag.

Request
curl -i echo.httpkit.com
Response
HTTP/1.1 200 OK
Server: nginx/1.1.19
Date: Wed, 29 Aug 2012 04:18:19 GMT
Content-Type: application/json; charset=utf-8
Content-Length: 391
Connection: keep-alive
X-Powered-By: http://httpkit.com

{
  "method": "GET",
  "uri": "/", ...
}

Shameless plug: Do you hack on REST API integrations or implementations? Wiretap is an HTTP debugger you can use to see every request and response between any client and HTTP API in real time. It’s entering private beta soon. Help test it!

on an Ubuntu system (probably Debian too)

$ sudo apt-get install php5-curl

The basic idea behind the cURL functions is that you initialize a cURL session using curl_init(), set all your options for the transfer via curl_setopt(), execute the session with curl_exec(), and then finish off your session using curl_close(). Here is an example that uses the cURL functions to fetch the example.com homepage into a file:

<?php
// Initialize a cURL session for the target URL.
$ch = curl_init("http://example.iana.org/");
// Open a local file that will receive the response body.
$fp = fopen("example_homepage.txt", "w");

// Write the response into the file instead of standard output,
// and leave the response headers out of that output.
curl_setopt($ch, CURLOPT_FILE, $fp);
curl_setopt($ch, CURLOPT_HEADER, 0);

// Perform the transfer, then release the cURL and file handles.
curl_exec($ch);
curl_close($ch);
fclose($fp);
?>

web servers

Apache, Nginx, Lighttpd

  • December 6, 2010
  • By Eric Geier

Here are six different web servers freely provided by the open source community for Linux, Windows, and other OSs:

Apache HTTP Server

Initially released in 1995, this is the most popular web server across the entire World Wide Web, currently used by around 60% of web domains. It’s released under the Apache License, which requires preservation of the copyright notices and disclaimers but doesn’t require modified versions to be distributed under the same license. Though most prevalent on Unix-like operating systems, it also runs on Windows, Mac OS X, and others.

Common languages supported by the Apache server include Perl, Python, Tcl, and PHP. The core functionality of the server can be extended with modules to add server-side programming language support, authentication schemes, and other features. Popular authentication modules include mod_access, mod_auth, mod_digest, and mod_auth_digest. Modules are also available for SSL/TLS support (mod_ssl), proxying (mod_proxy), URL rewriting (mod_rewrite), custom logging (mod_log_config), and filtering support (mod_include and mod_ext_filter).
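
As a hedged sketch of how these modules come together in practice, an httpd.conf fragment might load mod_rewrite and mod_ssl and redirect plain HTTP traffic to HTTPS (the paths and hostname are invented for illustration):

    # Load optional modules (module paths vary by distribution).
    LoadModule rewrite_module modules/mod_rewrite.so
    LoadModule ssl_module     modules/mod_ssl.so

    <VirtualHost *:80>
        ServerName www.example.com
        # Send every plain-HTTP request to the HTTPS site.
        RewriteEngine On
        RewriteRule ^(.*)$ https://www.example.com$1 [R=301,L]
    </VirtualHost>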

When searching the web you’ll find an endless slew of distributions and packages containing the Apache HTTP server along with other web applications, such as MySQL and PHP, for Linux, Windows, and other OSs. These can make it much easier to install and deploy a feature-rich web server.

Nginx

Nginx (pronounced “engine X”) is the second most popular open source web server currently on the Internet. Though development only started in 2002, it’s currently used by over 6% of web domains. It is a lightweight HTTP server and can also serve as a reverse proxy and IMAP/POP3 proxy server. It’s licensed under a BSD-like license and runs on UNIX, GNU/Linux, BSD, Mac OS X, Solaris, and Windows.

Nginx was built with performance in mind, in particular to handle ten thousand clients simultaneously. Instead of using threads to handle requests, like traditional servers, Nginx uses an event-driven (asynchronous) architecture. It’s more scalable and uses smaller, more predictable amounts of memory. In addition to the basic HTTP features, Nginx also supports name-based and IP-based virtual servers, keep-alive and pipelined connections, and FLV streaming. It can also be reconfigured and upgraded online without interrupting client processing.
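
A minimal sketch of that dual web-server/reverse-proxy role (the backend address and paths are assumptions for illustration):

    server {
        listen      80;
        server_name www.example.com;

        # Serve static files directly ...
        location /static/ {
            root /var/www;
        }

        # ... and hand everything else to an application backend.
        location / {
            proxy_pass       http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }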

Lighttpd

Lighttpd (pronounced “lighty”) is the third most popular open source web server. This lightweight server was initially released in 2003 and currently serves less than 1% of web domains. It’s licensed under a revised BSD license and runs on Unix and Linux.

Like Nginx, Lighttpd is a lightweight server built for performance, with a goal of handling ten thousand clients simultaneously. It also uses an event-driven (asynchronous) architecture.

Cherokee

Cherokee is a full-featured web server with a user friendly configuration GUI, just released in 2010 under the GNU General Public License (GPL). It runs on Linux, Solaris, Mac OS X, and Windows.

Cherokee supports popular technologies such as FastCGI, SCGI, PHP, CGI, SSI, and TLS/SSL. It also features virtual host capability, authentication, load balancing, and Apache-compatible log files. Plus there are some neat features, such as zero-downtime updates, where configuration changes can be applied with no restart required, and secure downloads with temporary URL generation.

HTTP Explorer

HTTP Explorer is a web server specially designed to serve files over the HTTP protocol. It was released in 2006 under the GNU General Public License (GPL). It’s available for Windows in many different languages, as a full installation or binary-only.

This server makes it easy to share your photos, music, videos, and other files. Using the server application, you can select folders and files to share, and you can define user accounts and permissions. Shared files can be accessed and viewed via the web interface; no client application is required. Photos are automatically shown with thumbnails, and music can be played with the integrated player.

HFS HTTP File Server

The HFS web server is for serving files, similar to HTTP Explorer but with a simpler web interface. It was released in 2009 under the GNU General Public License (GPL). It’s a single executable file that can run on 32-bit versions of Windows, and on Linux with Wine.

The HFS server lets you and/or your friends easily send, receive, and remotely access files over the web. Files can be downloaded from and uploaded to the server via the web interface, in addition to using the server application. It’s customizable and features user account authentication, a virtual file system, HTML templates, bandwidth controls, logs, and a dynamic DNS updater.

Eric Geier is the founder of NoWiresSecurity, which helps businesses easily protect their Wi-Fi networks with the Enterprise mode of WPA/WPA2 encryption by offering an outsourced RADIUS service. He is also a freelance tech writer, and has authored many networking and computing books for brands like For Dummies and Cisco Press.

Disk utilities

EaseUS Disk Copy Home is a free disk/partition clone software for home users only. Regardless of your operating system, file system, and partition scheme, by creating a bootable CD it can copy your disk sector by sector to give you a 100% identical copy of the original. It is a perfect free companion to Data Recovery Wizard for recovering files from a backup disk.

EaseUS Disk Copy makes it utterly simple to create a bootable disk for your system on a CD or DVD, USB drive, or ISO image file, and use it to copy or clone disk partitions and recover data and partitions from backups, including sector-by-sector copying for total compatibility. With it, you can perform disk operations that usually require more than one drive (even more than one computer), such as recovering a backup of your main drive.

EaseUS Disk Copy is fully portable, so it runs as soon as you click its program file, without having to be installed, even from a USB drive or similar device. The program’s disk wizard is a simple dialog box with three choices for creating a bootable drive, with drop-down lists for multiple destinations: USB, CD/DVD, and Export ISO (you browse to select a destination for an ISO file for further use). We inserted a blank DVD-R into our disk tray, and EaseUS Disk Copy’s built-in burning software recognized it. We selected CD/DVD and pressed Proceed. Immediately the software began analyzing our system and burning our bootable drive. The whole process finished quickly. We removed the disk and labeled it, since a bootable disk you can’t find or identify doesn’t help much when your system is kaput. We reinserted the disk, rebooted our system, accessed the boot menu, and selected CD-ROM. As it should, our system booted to EaseUS Disk Copy’s menu.

At this point we could choose to continue into Disk Copy, boot from the first hard drive, or select an additional partition to boot from (handy for multi-OS systems). We selected Disk Copy, and the program’s disk copying and cloning wizard opened. This wizard walked us through each step of choosing a disk or partition as well as operations and options. The sector-by-sector option takes more time and uses more space, since it creates a one-for-one clone of your disk.

For a simple, free way to create bootable disks to use with backups and to copy your hard drives and partitions, it’s hard to do better than EaseUS Disk Copy.

Read more: EaseUS Disk Copy Home Edition – CNET Download.com http://download.cnet.com/EaseUS-Disk-Copy-Home-Edition/3000-2242_4-10867157.html


G4L is a hard disk and partition imaging and cloning tool. The created images are optionally compressed and transferred to an FTP server or cloned locally. CIFS (Windows), SSHFS, and NFS support is included, along with udpcast and fsarchiver options.

GPT partition support was added in version 0.41.

Backing up Windows partitions requires the use of a bootable G4L CD or running g4l via grub4dos.

G4L Web Site


Clonezilla is a partition and disk imaging/cloning program similar to Norton Ghost®. It saves and restores only the used blocks on a hard drive. Two types of Clonezilla are available: Clonezilla Live and Clonezilla SE (Server Edition).


Darik’s Boot and Nuke (DBAN) is free erasure software designed for consumer use. DBAN users should be aware of some product limitations, including:
  • No guarantee that data is removed
  • Limited hardware support (e.g. no RAID dismantling)
  • No customer support

DBAN is a self-contained boot disk that automatically deletes the contents of any hard disk that it can detect. This method can help prevent identity theft before recycling a computer. It is also a solution commonly used to remove viruses and spyware from Microsoft Windows installations. DBAN prevents all known techniques of hard disk forensic analysis. It does not provide users with a proof of erasure, such as an audit-ready erasure report.

Professional data erasure tools are recommended for company and organizational users. For secure data erasure with audit-ready reporting, contact Blancco or download a free evaluation license.


Unlocker Portable 1.9.0

A file eraser: freeware to easily delete and kill stubborn files.

  • Ever had Windows give you an annoying message like this?

It has many other flavors:

Cannot delete file: Access is denied
There has been a sharing violation.
The source or destination file may be in use.
The file is in use by another program or user.
Make sure the disk is not full or write-protected and that the file is not currently in use.

finnix

Finnix is a self-contained, bootable Linux CD distribution (“LiveCD”) for system administrators, based on Debian. You can mount and manipulate hard drives and partitions, monitor networks, rebuild boot records, install other operating systems, and much more. Finnix includes the latest technology for system administrators, with Linux kernel 3.0, x86 and PowerPC support, and hundreds of sysadmin-geared packages. And above all, Finnix is small; currently the entire distribution is over 400 MiB, but it is dynamically compressed into a small bootable image. Finnix is not intended for the average desktop user, and does not include any desktops, productivity tools, or sound support, in order to keep the distribution size low.

Digital Forensics

What is odessa?

It’s an acronym for “Open Digital Evidence Search and Seizure Architecture”
The intent of this project is to provide a completely open and extensible suite of tools for performing digital evidence analysis, as well as a means of generating a usable report detailing the analysis and any findings. The odessa tool suite currently represents more than 7 man-years of labor and consists of 3 highly modular cross-platform tools for the acquisition, analysis, and documentation of digital evidence.

In addition to the odessa tool suite, the project hosts other applications and information related to digital forensics. At this time, the list of additional tools includes a set of whitepapers and utilities authored by Keith J. Jones including Galleta, a tool for analyzing Internet Explorer cookies, Pasco, a tool for analyzing the Microsoft Windows index.dat file, and Rifiuti, a tool for investigating the Microsoft Windows recycle bin info2 file.

CAINE (Computer Aided INvestigative Environment) is an Italian GNU/Linux live distribution created as a Digital Forensics project. Currently the project manager is Nanni Bassetti. CAINE offers a complete forensic environment that is organized to integrate existing software tools as software modules and to provide a friendly graphical interface. The main design objectives that CAINE aims to guarantee are the following:

  • an interoperable environment that supports the digital investigator during the four phases of the digital investigation
  • a user friendly graphical interface
  • a semi-automated compilation of the final report

We recommend that you read the page on the CAINE policies carefully.
CAINE fully represents the spirit of the open-source philosophy: the project is completely open, and anyone can take up the legacy of the previous developer or project manager. The distro is open source, the Windows side (WinTaylor) is open source and, last but not least, the distro is installable, giving the opportunity to rebuild it in a brand new version and a long life to this project.

http://linuxzoo.net/page/tut_caine_lab1.html

Information Systems Security


The Open Source Security Testing Methodology

http://www.isecom.org/mirror/OSSTMM.3.pdf


The Information Systems Security Assessment Framework (ISSAF) seeks to integrate the following management tools and internal control checklists:

Evaluate the organization’s information security policies and processes, and report on their compliance with IT industry standards and applicable laws and regulatory requirements
Identify and assess the business dependencies on infrastructure services provided by IT
Conduct vulnerability assessments and penetration tests to highlight system vulnerabilities that could result in potential risks to information assets
Specify evaluation models by security domains to:
Find misconfigurations and rectify them
Identify risks related to technologies and address them
Identify risks within people or business processes and address them
Strengthen existing processes and technologies
Provide best practices and procedures to support business continuity initiatives

Business Benefits of ISSAF

The ISSAF is intended to comprehensively report on the implementation of existing controls to support IEC/ISO 27001:2005 (BS7799), Sarbanes-Oxley SOX 404, CoBIT, SAS 70, and COSO, thus adding value to the operational aspects of IT-related business transformation programmes.
Its primary value derives from the fact that it provides a tested resource for security practitioners, freeing them from commensurate investment in commercial resources or extensive internal research to address their information security needs.
It is designed from the ground up to evolve into a comprehensive body of knowledge for organizations seeking independence and neutrality in their security assessment efforts.

It is the first framework to provide validation for bottom-up security strategies, such as penetration testing, as well as top-down approaches, such as the standardization of an audit checklist for information policies.


The Open Web Application Security Project (OWASP) is an open-source application security project. The OWASP community includes corporations, educational organizations, and individuals from around the world. This community works to create freely available articles, methodologies, documentation, tools, and technologies. The OWASP Foundation is a 501(c)(3) charitable organization that supports and manages OWASP projects and infrastructure. It has also been a registered nonprofit in Europe since June 2011.

OWASP is not affiliated with any technology company, although it supports the informed use of security technology. OWASP has avoided affiliation as it believes freedom from organizational pressures may make it easier for it to provide unbiased, practical, cost-effective information about application security.[citation needed] OWASP advocates approaching application security by considering the people, process, and technology dimensions.

OWASP’s most successful documents include the book-length OWASP Guide,[1] the OWASP Code Review Guide,[2] and the widely adopted Top 10 awareness document.[3][citation needed] The most widely used OWASP tools include their training environment,[4] their penetration testing proxy WebScarab,[5] and their .NET tools.[6] OWASP includes roughly 190 local chapters[7] around the world and thousands of participants on the project mailing lists. OWASP has organized the AppSec[8] series of conferences to further build the application security community.

OWASP is also an emerging standards body, with the publication of its first standard in December 2008, the OWASP Application Security Verification Standard (ASVS).[9] The primary aim of the OWASP ASVS Project is to normalize the range of coverage and level of rigor available in the market when it comes to performing application-level security verification. The goal is to create a set of commercially workable open standards that are tailored to specific web-based technologies. A Web Application Edition has been published. A Web Service Edition is under development.

The OWASP Top Ten Project (a separate OWASP Top 10 Mobile list also exists).
The Release Candidate for the OWASP Top 10 for 2013 is now available: OWASP Top 10 – 2013 – Release Candidate

The OWASP Top 10 – 2013 Release Candidate includes the following changes as compared to the 2010 edition:

  • A1 Injection
  • A2 Broken Authentication and Session Management (was formerly A3)
  • A3 Cross-Site Scripting (XSS) (was formerly A2)
  • A4 Insecure Direct Object References
  • A5 Security Misconfiguration (was formerly A6)
  • A6 Sensitive Data Exposure (merged from former A7 Insecure Cryptographic Storage and former A9 Insufficient Transport Layer Protection)
  • A7 Missing Function Level Access Control (renamed/broadened from former A8 Failure to Restrict URL Access)
  • A8 Cross-Site Request Forgery (CSRF) (was formerly A5)
  • A9 Using Known Vulnerable Components (new but was part of former A6 – Security Misconfiguration)
  • A10 Unvalidated Redirects and Forwards

Please review this release candidate and provide comments to dave.wichers@owasp.org or to the OWASP Top 10 mailing list (which you must be subscribed to). The comment period is open from Feb 16 through March 30, 2013 and a final version will be released in May 2013.

If you are interested, the methodology for how the Top 10 is produced is now documented here: OWASP Top 10 Development Methodology

OWASP Appsec Tutorial Series

Uploaded on Jan 30, 2011
The first episode in the OWASP Appsec Tutorial Series. This episode describes what the series is going to cover, why it is vital to learn about application security, and what to expect in upcoming episodes.

Uploaded on Feb 8, 2011
The second episode in the OWASP Appsec Tutorial Series. This episode describes the #1 attack on the OWASP top 10 – injection attacks. This episode illustrates SQL Injection, discusses other injection attacks, covers basic fixes, and then recommends resources for further learning.
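
The canonical basic fix is to keep user input out of the SQL text, for example with bound parameters. A minimal sketch in Perl using DBI (the table, database, and variable names are invented, and DBD::SQLite is assumed to be installed):

    use strict;
    use warnings;
    use DBI;

    my $dbh  = DBI->connect('dbi:SQLite:dbname=app.db', '', '',
                            { RaiseError => 1 });
    my $name = $ARGV[0];    # untrusted user input

    # Unsafe: interpolating $name into the SQL string permits injection.
    # my $rows = $dbh->selectall_arrayref(
    #     "SELECT id FROM users WHERE name = '$name'");

    # Safe: the placeholder keeps $name as data, never as SQL.
    my $rows = $dbh->selectall_arrayref(
        'SELECT id FROM users WHERE name = ?', undef, $name);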

Uploaded on Jul 11, 2011
The third episode in the OWASP Appsec Tutorial Series. This episode describes the #2 attack on the OWASP top 10 – Cross-Site Scripting (XSS). This episode illustrates three versions of an XSS attack: high level, detailed with the script tag, and detailed with no script tag, and then recommends resources for further learning.

Published on Sep 24, 2012
The fourth episode in the OWASP Appsec Tutorial Series. This episode describes the importance of using HTTPS for all sensitive communication, and how the HTTP Strict Transport Security header can be used to ensure greater security by transforming all HTTP links to HTTPS automatically in the browser.


DEFT 7 is based on the new kernel 3 (Linux side) and DART (Digital Advanced Response Toolkit), with the best freeware Windows computer forensic tools. It is a new concept of computer forensic system that uses LXDE as its desktop environment, WINE to execute Windows tools under Linux, and a mount manager as the tool for device management.

It is a very professional and stable system that includes excellent hardware detection and the best free and open source applications dedicated to Incident Response, Cyber Intelligence and Computer Forensics.

DEFT is meant to be used by:

Military
Police
Investigators
IT Auditors
Individuals

DEFT is 100% made in Italy

Android SDK

Android software development is the process by which new applications are created for the Android operating system. Applications are usually developed in the Java programming language using the Android Software Development Kit, but other development tools are available. As of October 2012[update], more than 700,000 applications have been developed for Android, with over 25 billion downloads.[2][3] A June 2011 survey indicated that, at the time of publication, over 67% of mobile developers used the platform.[4] In Q2 2012, around 105 million Android smartphones were shipped, for a total share of 68% of overall smartphone sales through Q2 2012.[5]

The ADT Bundle provides everything you need to start developing apps, including a version of the Eclipse IDE with built-in ADT (Android Developer Tools) to streamline your Android app development. If you haven’t already, go download the Android ADT Bundle. (If you downloaded the SDK Tools only, for use with an existing IDE, you should instead read Setting Up an Existing IDE.)

Install the SDK and Eclipse IDE

  1. Unpack the ZIP file (named adt-bundle-<os_platform>.zip) and save it to an appropriate location, such as a “Development” directory in your home directory.
  2. Open the adt-bundle-<os_platform>/eclipse/ directory and launch eclipse.

That’s it! The IDE is already loaded with the Android Developer Tools plugin and the SDK is ready to go. To start developing, read Building Your First App.

Caution: Do not move any of the files or directories from the adt-bundle-<os_platform> directory. If you move the eclipse or sdk directory, ADT will not be able to locate the SDK and you’ll need to manually update the ADT preferences.

Additional information

As you continue developing apps, you may need to install additional versions of Android for the emulator and other packages such as the library for Google Play In-app Billing. To install more packages, use the SDK Manager.
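
On the ADT bundle of this era, the SDK Manager can be launched from the command line roughly as follows (the exact path depends on where you unpacked the bundle):

    $ cd adt-bundle-<os_platform>/sdk/tools
    $ ./android sdk     # opens the SDK Manager GUI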

Everything you need to develop Android apps is on this web site, including design guidelines, developer training, API reference, and information about how you can distribute your app. For additional resources about developing and distributing your app, see the Developer Support Resources.

There is a community of open-source enthusiasts that build and share Android-based firmware with a number of customizations and additional features, such as FLAC lossless audio support and the ability to store downloaded applications on the microSD card.[42] This usually involves rooting the device. Rooting allows users root access to the operating system, enabling full control of the phone. In order to use custom firmwares the device’s bootloader must be unlocked. Rooting alone does not allow the flashing of custom firmware. Modified firmwares allow users of older phones to use applications available only on newer releases.[43]
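
On devices with an unlockable bootloader (Nexus-class phones, for example), the unlock step is commonly performed with fastboot. A hedged sketch; the exact command varies by manufacturer, and unlocking wipes the device:

    $ adb reboot bootloader     # restart the phone into its bootloader
    $ fastboot oem unlock       # manufacturer-specific; erases user data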

Those firmware packages are updated frequently, incorporate elements of Android functionality that haven’t yet been officially released within a carrier-sanctioned firmware, and tend to have fewer limitations. CyanogenMod and OMFGB are examples of such firmware.

On 24 September 2009, Google issued a cease and desist letter[44] to the modder Cyanogen, citing issues with the re-distribution of Google’s closed-source applications[45] within the custom firmware. Even though most of Android OS is open source, phones come packaged with closed-source Google applications for functionality such as the Android Market and GPS navigation. Google has asserted that these applications can only be provided through approved distribution channels by licensed distributors. Cyanogen has complied with Google’s wishes and is continuing to distribute this mod without the proprietary software. He has provided a method to back up licensed Google applications during the mod’s install process and restore them when it is complete.[46]

The NDK is a toolset that allows you to implement parts of your app using native-code languages such as C and C++. For certain types of apps, this can be helpful so you can reuse existing code libraries written in these languages, but most apps do not need the Android NDK.

Before downloading the NDK, you should understand that the NDK will not benefit most apps. As a developer, you need to balance its benefits against its drawbacks. Notably, using native code on Android generally does not result in a noticeable performance improvement, but it always increases your app’s complexity. In general, you should only use the NDK if it is essential to your app, never because you simply prefer to program in C/C++.

Typical good candidates for the NDK are self-contained, CPU-intensive operations that don’t allocate much memory, such as signal processing, physics simulation, and so on. When examining whether or not you should develop in native code, think about your requirements and see if the Android framework APIs provide the functionality that you need.
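
As a flavor of what NDK code looks like, here is a minimal hypothetical JNI function in C; it assumes a Java class com.example.app.Native declaring public static native int addOne(int x):

    #include <jni.h>

    /* Implements the hypothetical Java declaration:
     *   package com.example.app;
     *   class Native { public static native int addOne(int x); }
     */
    JNIEXPORT jint JNICALL
    Java_com_example_app_Native_addOne(JNIEnv *env, jclass clazz, jint x)
    {
        /* Trivial placeholder work; real NDK code would do the
         * CPU-intensive processing described above. */
        return x + 1;
    }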


MobileGo is a life saver for those who love music and video, text a lot, and juggle apps on their Android phones and tablets.

Android fans: back up everything to PC with one click and retain 100% quality.
Music lovers: instantly add fun stuff and enjoy media anytime, anywhere.
App addicts: download, install, uninstall, and export apps quickly and easily.
Socialites: transfer contacts from/to Outlook and send and reply to SMS seamlessly from your PC.

The Android 3.1 platform (also backported to Android 2.3.4) introduces Android Open Accessory support, which allows external USB hardware (an Android USB accessory) to interact with an Android-powered device in a special “accessory” mode. When an Android-powered device is in accessory mode, the connected accessory acts as the USB host (powers the bus and enumerates devices) and the Android-powered device acts as the USB device. Android USB accessories are specifically designed to attach to Android-powered devices and adhere to a simple protocol (Android accessory protocol) that allows them to detect Android-powered devices that support accessory mode.[22]

anonymized run

The Amnesic Incognito Live System, or Tails, is a Debian-based Linux distribution aimed at preserving privacy and anonymity.[1] It is the next iteration of development on the previous Gentoo-based Incognito Linux distribution.[2] All its outgoing connections are forced to go through Tor,[3] and direct (non-anonymous) connections are blocked. The system is designed to be booted as a live CD or USB, and leaves no trace on the machine unless explicitly told to do so. The Tor Project has provided most of the financial support for its development.[4]

Tails is a live system that aims at preserving your privacy and anonymity. It helps you use the Internet anonymously almost anywhere you go and on any computer, leaving no trace unless you ask it to explicitly.

It is a complete operating system designed to be used from a DVD or a USB stick independently of the computer’s original operating system. It is Free Software and based on Debian GNU/Linux.

Tails comes with several built-in applications pre-configured with security in mind: web browser, instant messaging client, email client, office suite, image and sound editor, etc.

Read about how you can help improve the Tails documentation.


quantOS, based on Linux Mint 11, is a hardened Linux distro for secure daily use. quantOS leverages AppArmor application security profiles, Arkose Desktop Application Sandboxing and Vidalia for creating secure Tor connections for enhanced privacy.


The DemocraKey was invented by Kirk, in response to government snooping and censorship in China and the United States. Six months later, he started DemocraKey.com to promote the DemocraKey and get help with his project.

kaos.theory’s Anonym.OS LiveCD is a bootable live CD based on OpenBSD that provides a hardened operating environment whereby all ingress traffic is denied and all egress traffic is automatically and transparently encrypted and/or anonymized.

Liberté Linux is a secure, lightweight, and easy to use Gentoo-based Linux distribution intended as a communication aid in hostile environments. Liberté installs on a USB key, and boots on any computer or laptop.

Gentoo Linux is a special flavor of Linux that can be automatically optimized and customized for just about any application or need. Extreme performance, configurability, and a top-notch user and developer community are all hallmarks of the Gentoo experience.

Eclipse, a universal tool – an open and extensible IDE

Eclipse: a professional tool within everyone’s reach. Although Eclipse is written mostly in Java (except for the core) and its most popular use is as a Java IDE, Eclipse is neutral and adaptable to any type of language, for example C/C++, Cobol, C#, XML, etc. The key characteristic of Eclipse is extensibility. Eclipse is a large structure made up of a core and many plug-ins that build up the final functionality. Plug-ins interact through interfaces, or extension points; in this way, new contributions integrate without difficulty or conflicts.

Eclipse was the product of a forty-million-dollar investment by IBM in its development before it was offered as an open-source product to the Eclipse.org consortium, initially composed of Borland and IBM. IBM continues to lead Eclipse development through its subsidiary OTI (Object Technology International), the creator of Eclipse. OTI was acquired by IBM in 1996 and had established itself as a major developer of object-oriented (OO) tools since the days of the Smalltalk language’s popularity. OTI was the IBM division that produced the Visual Age products, which set the standard for object-oriented development tools. Many concepts pioneered in Smalltalk were applied to Java in Visual Age for Java (VA4J). VA4J was written in Smalltalk; Eclipse is a rewrite of VA4J in Java. The base for Eclipse is the Rich Client Platform (RCP). The following components make up the Rich Client Platform:

  • Core platform – Eclipse start-up and plug-in execution
  • OSGi – a platform for integrating bundles
  • The Standard Widget Toolkit (SWT) – a portable widget toolkit
  • JFace – file handling, text handling, and text editors
  • The Eclipse Workbench – views, editors, perspectives, and wizards

Eclipse’s widgets are implemented with a Java widget toolkit called SWT, unlike most Java applications, which use the standard Abstract Window Toolkit (AWT) or Swing. Eclipse’s user interface also has an intermediate GUI layer called JFace, which simplifies the construction of applications based on SWT. The Eclipse integrated development environment (IDE) uses plug-ins to provide all of its functionality on top of the rich client platform, unlike other monolithic environments where all the functionality is included whether the user needs it or not. This plug-in mechanism is a lightweight platform for software components. Support for Java and CVS is provided in the Eclipse SDK. As for client applications, Eclipse provides the programmer with very rich frameworks for developing graphical applications, defining and manipulating software models, building web applications, etc. For example, GEF (Graphical Editing Framework) is an Eclipse plug-in for developing visual editors that can range from WYSIWYG word processors to UML diagram editors, graphical user interfaces (GUIs), and so on.

The Eclipse SDK includes the Java development tools, offering an IDE with an internal Java compiler and a complete model of the Java source files. This allows advanced refactoring techniques and code analysis. The IDE also uses a workspace, in this case a group of metadata over a flat file space, allowing external modifications to the files as long as the corresponding workspace is refreshed.

Core: its task is to determine which plug-ins are available in Eclipse’s plug-in directory. Each plug-in has an XML manifest file that lists the elements it needs from other plug-ins as well as the extension points it offers. Since the number of plug-ins can be very large, only the necessary ones are loaded when they are actually used, in order to minimize Eclipse’s start-up time and resource usage.

Workspace: manages the user’s resources, organized into one or more projects. Each project corresponds to a directory in Eclipse’s working directory and contains files and folders.

User interface: displays the menus and toolbars, and is organized into perspectives that arrange the code editors and views.

Unlike many applications written in Java, Eclipse looks and behaves like a native application. It is programmed with SWT (Standard Widget Toolkit) and JFace (a toolkit built on top of SWT), which emulates each operating system’s native graphics. This has been a debated aspect of Eclipse, because SWT must be ported to each operating system to interact with its graphics system. In Java projects, AWT and Swing can be used, except when developing a plug-in for Eclipse.

To download Eclipse, there are distributions with different combinations of plug-ins depending on the intended use of the tool. One problem with these distributions is that on Windows XP the built-in decompressor sometimes fails, and it is preferable to use an external program such as 7-Zip, WinZip, or Info-ZIP.
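
As a sketch of the plug-in manifest mechanism described above, a plug-in that contributes a view through the standard org.eclipse.ui.views extension point might declare something like the following in its plugin.xml (the identifiers here are invented for illustration):

    <?xml version="1.0" encoding="UTF-8"?>
    <plugin>
       <!-- Contribute a view to the workbench through an extension point. -->
       <extension point="org.eclipse.ui.views">
          <view id="com.example.views.SampleView"
                name="Sample View"
                class="com.example.views.SampleView"/>
       </extension>
    </plugin>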