
Ongoing: Understanding the Definitive Guide of ..

Every enterprise is powered by data, and its applications create data. If they don't.. that is nonsense.

Building a Data Analysis Web Application with the Django Framework


✖️Beginner
✔️Intermediate
✔️Advanced

Getting to know the various libraries in Python
Bismillah. Please visit the following link.
https://play.google.com/store/books/details?id=tsSvDwAAQBAJ

Hope it's useful 🙂

DNS Resolver Keeps Getting Overwritten

The resolvconf package on Ubuntu helped me find a solution for keeping our nameservers in resolv.conf. The configuration kept getting overwritten and reset to the default nameservers (apparently by NetworkManager ☹️).

Yes, this resolvconf package can be a powerful solution.
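
A minimal sketch of how I would use it (the nameserver address here is only an example; use your own resolver):

apt-get install resolvconf
# lines in "head" are prepended to the generated /etc/resolv.conf
echo "nameserver 8.8.8.8" >> /etc/resolvconf/resolv.conf.d/head
resolvconf -u    # regenerate /etc/resolv.conf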

Lightweight Directory Access Protocol (LDAP) Linux

In this section, we will also describe the web-based LDAP client phpLDAPadmin, which simplifies LDAP configuration. Honestly, this is a 'deprecated' article from my archive, but it should definitely still fulfill your needs.
The content itself consists of the two following parts: 1) installation and configuration of LDAP and phpLDAPadmin, and 2) integration of LDAP into the Thunderbird address book for e-mail purposes.
OK, here we are. Ubuntu 16.04 Xenial will be our operating system for this. It has a bunch of packages we can install; here are the commands to install LDAP:

1). Installation and configuration LDAP and phpLDAPadmin

root@ubuntu:/home/tifosilinux# apt-get update
root@ubuntu:/home/tifosilinux# apt-get install slapd ldap-utils
root@ubuntu:/home/tifosilinux# slapd -VVV
@(#) $OpenLDAP: slapd  (Ubuntu) (Oct 23 2018 12:47:19) $
        buildd@lgw01-amd64-005:/build/openldap-rkQ3K8/openldap-2.4.42+dfsg/debian/build/servers/slapd

Included static backends:
    config
    ldif

The output above tells us that we are ready to configure LDAP and create some configuration. Reconfigure the package:

root@ubuntu:/home/tifosilinux# dpkg-reconfigure slapd

You will be prompted for: the administrator password, DNS domain name, organization name, and database backend (BDB, HDB, or MDB can be used). For an instant deployment, install phpldapadmin and use .ldif files (instead of csv, dsml, or vcard) as configuration files.
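
The web client itself is a single package on Ubuntu 16.04 (it pulls in a web server and PHP as dependencies):

root@ubuntu:/home/tifosilinux# apt-get install phpldapadmin

A plain ldapsearch then dumps the current directory tree, which looks like this: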

root@ubuntu:/home/tifosilinux# ldapsearch -x
# extended LDIF
#
# LDAPv3
# base <dc=hary,dc=tifosilinux,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
#

# hary.tifosilinux.com
dn: dc=hary,dc=tifosilinux,dc=com
objectClass: top
objectClass: dcObject
objectClass: organization
o: telkom
dc: hary

# admin, hary.tifosilinux.com
dn: cn=admin,dc=hary,dc=tifosilinux,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
description: LDAP administrator

# users, hary.tifosilinux.com
dn: ou=users,dc=hary,dc=tifosilinux,dc=com
objectClass: organizationalUnit
objectClass: top
ou: users

# groups, hary.tifosilinux.com
dn: ou=groups,dc=hary,dc=tifosilinux,dc=com
objectClass: organizationalUnit
objectClass: top
ou: groups

# admin, users, hary.tifosilinux.com
dn: cn=admin,ou=users,dc=hary,dc=tifosilinux,dc=com
objectClass: top
objectClass: person
objectClass: inetOrgPerson
objectClass: organizationalPerson
cn: harysmatta
cn: admin
givenName: hary

...

root@ubuntu:/home/tifosilinux# cat self.ldif
dn: cn=admin,ou=users,dc=hary,dc=tifosilinux,dc=com
objectClass: top
objectClass: person
objectClass: inetOrgPerson
objectClass: organizationalPerson
cn: admin
cn: Libria Puji
gn: Bia
sn: Agustiani
userPassword: secret
mail: beeya@yahoo.com
o: telkom
postofficebox: PO Box 17135
l: Jakarta Forest
st: JKT
postalCode: 17135
telephoneNumber: (02) 9451 1144
facsimileTelephoneNumber: (02) 9451 1122
mobile: 0408 239 711

# Edit the phpldapadmin configuration
root@ubuntu:/home/tifosilinux# vim /etc/phpldapadmin/config.php
...
/* Hide the warnings for invalid objectClasses/attributes in templates. */
$config->custom->appearance['hide_template_warning'] = true;
...
/* Examples:
   'ldap.example.com',
   'ldaps://ldap.example.com/',
   'ldapi://%2fusr%2flocal%2fvar%2frun%2fldapi'
           (Unix socket at /usr/local/var/run/ldap) */
$servers->setValue('server','host','192.168.75.158');

/* The port your LDAP server listens on (no quotes). 389 is standard. */
// $servers->setValue('server','port',389);

/* Array of base DNs of your LDAP server. Leave this blank to have phpLDAPadmin
   auto-detect it for you. */
#$servers->setValue('server','base',array('dc=example,dc=com'));
$servers->setValue('server','base',array('dc=hary,dc=tifosilinux,dc=com'));
...

root@ubuntu:/home/tifosilinux# ldapadd -x -D 'cn=admin,dc=hary,dc=tifosilinux,dc=com' -W -f <filename>.ldif
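
To check that an entry from such a file actually landed, run a scoped search (the filter below just matches this article's example entry):

root@ubuntu:/home/tifosilinux# ldapsearch -x -b 'ou=users,dc=hary,dc=tifosilinux,dc=com' '(cn=Libria Puji)'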

For our reference table of LDAP attributes used in address book entries, we use this information:

  • CN = Common Name
  • OU = Organizational Unit
  • DC = Domain Component
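
For example, the DN cn=admin,ou=users,dc=hary,dc=tifosilinux,dc=com reads from right to left: the domain components hary.tifosilinux.com, then the organizational unit users, then the common name admin.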

Next, we should determine our objective, such as whether we want to add a Generic: User Account, a Generic: Posix Group, a Generic: Organisational Unit, or something else.


2). Integration LDAP into Thunderbird Addressbook for e-mail purposes.

Eventually, this could be very challenging, and it is tough to port this into other technology in our corporate environment.
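
As a rough sketch (these values are assumptions based on the DNs used above, not screenshots from the original post), Thunderbird's Address Book – File – New – LDAP Directory dialog needs roughly:

Name:        tifosilinux
Hostname:    192.168.75.158
Base DN:     ou=users,dc=hary,dc=tifosilinux,dc=com
Port number: 389
Bind DN:     cn=admin,dc=hary,dc=tifosilinux,dc=com

Leave the Bind DN empty only if the server allows anonymous reads.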

Debugger on ELK

This is my one and only article about Elasticsearch. We know how powerful ELK is at digging through and cultivating millions of rows to extract insight, then visualizing it in Kibana. So I presume we already know how to install, configure, and monitor all the running processes. First, we have to make sure the Elasticsearch service is listening on port 9200.

curl -XGET "http://localhost:9200/_cat/indices"
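
If the node is up, this returns the list of indices. A cluster health check is also a handy extra (my addition, not part of the original notes):

curl -XGET "http://localhost:9200/_cluster/health?pretty"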

And secondly, I am not sure whether my /etc/filebeat/filebeat.yml was set up strictly according to the concept or will fulfill your needs. Besides using filebeat.inputs as the inputs in filebeat.yml, I have also defined inputs in logstash.conf.

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  scan_frequency: 3s
  tail_files: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log

  
#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["127.0.0.1:5060"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
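
Before starting the service, the file can be validated. These subcommands exist in Filebeat 6.x; running them is my own sanity check, not part of the original post:

filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml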

You can see from the setup and output sections that the Kibana service listens on localhost:5601 and that events are redirected to output.logstash. Then run it with systemctl start filebeat.

Next, for my /etc/logstash/logstash.conf configuration, I used the following pipeline definition (the Logstash config DSL, not YAML) to parse an nginx log and a CSV log with a pipe delimiter.

input {
  file {
    path => "/home/datalog/radius.log"
    sincedb_path => "/dev/null"
    start_position => "beginning"
  }
}


filter {

  # Grok pattern to parse the nginx log file; adapt it to your own needs.
  #if [type] == "log" {
  #  grok {
  #    match => { "message" => "%{IPORHOST:remote_ip} - %{DATA:user_name} \[%{HTTPDATE:access_time}\] \"%{WORD:http_method} %{DATA:url} HTTP/%{NUMBER:http_version}\" %{NUMBER:response_code} %{NUMBER:body_sent_bytes} \"%{DATA:referrer}\" \"%{DATA:agent}\"" }
  #    remove_field => "message"
  #  }
  #}

  csv {
    columns => ["DATE", "USER", "STAT", "REG", "MACADDR", "LOCAP", "DESC", "IPADDR"]
    autodetect_column_names => false
    autogenerate_column_names => true
    skip_empty_columns => false
    skip_empty_rows => false
    separator => "|"
  }

  date {
    match => ["DATE", "yyyyMMddHHmmss"]
    target => "DATE"
  }

}


output {
  elasticsearch { hosts => ["localhost:9200"] index => "your index name goes here" }
  stdout { codec => rubydebug }
}

Note this line: stdout { codec => rubydebug }
It makes Logstash verbose (more talkative) when run with these parameters:

 ./bin/logstash -f /etc/logstash/logstash.conf
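
The same binary can also syntax-check the pipeline first (a step I would recommend, though it was not in my original notes):

 ./bin/logstash -f /etc/logstash/logstash.conf --config.test_and_exit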

Our nginx sample log:

10.177.100.158 - - [05/Mar/2015:10:23:35 -0500] "GET / HTTP/1.1" 200 432 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.23"
10.177.100.158 - - [05/Mar/2015:10:23:35 -0500] "GET /favicon.ico HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"
10.177.100.158 - - [05/Mar/2015:10:23:35 -0500] "GET /favicon.ico HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"
10.177.100.158 - - [05/Mar/2015:10:36:07 -0500] "GET /tes.html HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"
10.177.100.158 - - [05/Mar/2015:10:37:27 -0500] "GET /tes.html HTTP/1.1" 404 391 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:36.0) Gecko/20100101 Firefox/36.0" "1.24"

Our csv sample log:

20191201012649|88420006432@sdfsf|success|WAG-D2-JT|b4:3a:28:cb:d7:28||Sukses Login - blank expDate[20191227211429] timeout=86400|10.10.10.5
20191201012649|9813574921432@erere|success|WAG-D5-KBL|38:71:5b:47:d1:24|SITSIT00196/TLK-WI32170615-0001:@tifosilinux|Sukses Login - SITSIT00196/TLK-WI32170615-0001:@tifosilinux expDate[20191207162126] timeout=86400|10.10.12.34
20191201012649|9812306212345@ytegh|success|WAG-D2-CKA|c4:e3:9f:4d:24:dd|JKTCKG00191/03-01AI-At-ASRAMA3:@tifosilinux|Sukses Login - JKTCKG00191/03-01AI-At-ASRAMA3:@tifosilinux expDate[20191204210918] timeout=86400|10.10.4.15

And here we are

One more thing: if you are ready to process more than a hundred million log lines and get more insight out of them, increase the heap size, and set elasticsearch.requestTimeout and elasticsearch.shardTimeout to "0" in the kibana.yml configuration.

/etc/elasticsearch/jvm.options
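
Inside that file, the heap is set with matching -Xms/-Xmx lines. The 4g below is only an example; keep both values equal and at most half of the machine's RAM:

-Xms4g
-Xmx4g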

In order to run multiple Logstash instances, we need different directories to store each instance's data and UUID. Then execute these commands in separate sessions or screens, one for each configuration:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf1.conf --path.data /usr/share/logstash/data1/
/usr/share/logstash/bin/logstash -f /etc/logstash/confN.conf --path.data /usr/share/logstash/dataN/

Last but not least: the most important thing, in order to avoid duplicated data and to protect against data loss during abnormal termination, is to remove the start_position => "beginning" line and set up persistent queues (these live in logstash.yml rather than logstash.conf; see the sketch right after the mapping example below). Then we can bulk-load the data, with the index and _doc names set automatically, and for a field adjustment, such as converting a string type to a date, we can run these on Dev Tools – Console:

curl -XPUT localhost:9200/<index_name>/ppl/_bulk?pretty --data-binary @<filename>.json -H 'Content-Type: application/json'

PUT /eva
{
  "mappings": {
    "my_type": {
      "properties": {
        "initTime": {
          "type": "date" 
        }
      }
    }
  }
}
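
And here is the persistent-queue sketch promised above. These settings belong in /etc/logstash/logstash.yml; the size and path are assumptions to tune for your own disk:

queue.type: persisted
queue.max_bytes: 4gb
path.queue: /var/lib/logstash/queue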

############ Additional file ############

// This snippet builds the <filename>.json bulk file loaded by the curl command above.
// $sql is assumed to hold the result of an earlier mysqli_query() call.
$a = 0;
$myfile = fopen("/home/tifosilinux/<filename>.json", "a") or die("Unable to open file!");

while($row2 = mysqli_fetch_array($sql))
{
        $a++;

        $result2 = array(
                'TRX_ID' => '"'.$row2['TRX_ID'].'"',
                'T_ID' => $row2['T_ID'],
                'AMOUNT' => $row2['AMOUNT'],
                'NOTUJUAN' => '"'.$row2['NOTUJUAN'].'"',
                'PRODUCT_CODE' => $row2['PRODUCT_CODE'],
                'BILLER' => $row2['BILLER'],
                'TGL_TRX' => date("Y-m-d\TH:i:s",strtotime($row2['TGL_TRX'])),
                'STATUS_TRX' => $row2['STATUS_TRX'],
                'ERROR_CODE' => $row2['ERROR_CODE'],
                'NM_BILLER' => $row2['NM_BILLER'],
                'NM_MR' => $row2['NM_MR'],
                'JENIS_TRX' => $row2['JENIS_TRX'],
        );

        // Bulk API format: one action line, then one document line per record.
        $content = '{ "index" : { "_id" : '.$a.' } }'."\n".json_encode($result2,JSON_NUMERIC_CHECK)."\n";
        fwrite($myfile, $content);
}
fclose($myfile);


Cheers
-Hary

Sorting Algorithm with Python

There are many ways to sort data besides the sort command in Linux/Unix (although in other cases the sort command can save you time). For example, sort with the options -n and -r will sort a file of numeric data in reverse (~# sort -nr file_with_numeric.txt); you can even use the option -k to define the column the sort is based on (~# sort -k 2n file_with_two_column.txt) and -M to sort by month name. But that is outside our topic here: Python will be the programming language for understanding which method can be the fastest fix for our data wrangling/munging. Bubble sort, insertion sort, selection sort, and quicksort are the candidates for sorting data efficiently and correctly.
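
As a simple baseline before the quicksort walkthrough below, here is a minimal insertion sort sketch in the same style, timing included (an illustration added here, not the version kept in the repository):

#!/usr/bin/env python3.5

import time

start = time.time()

def insertionSort(arr):
    # grow a sorted prefix one element at a time
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # shift larger elements right to make room for key
        while j >= 0 and arr[j] > key:
            arr[j+1] = arr[j]
            j -= 1
        arr[j+1] = key

arr = [56,8,88,1,4,3,17,20,3,87]
insertionSort(arr)
print('Sorted array is: ', arr)

end = time.time()
print('Speed time : ', (end-start))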

data_structure_quicksort.py

#!/usr/bin/env python3.5

import time

# Python program for implementation of Quicksort Sort 

# This function takes last element as pivot, places 
# the pivot element at its correct position in sorted 
# array, and places all smaller (smaller than pivot) 
# to left of pivot and all greater elements to right 
# of pivot 

start = time.time()

def partition(arr,low,high):
    i = ( low-1 )        # index of smaller element
    pivot = arr[high]    # pivot
    
    for j in range(low , high): 

        # If current element is smaller than or 
        # equal to pivot
        print('arr[j] & pivot | j = %d & %d | %d \t\t' % (arr[j],pivot,j), end='')
        print()
        if arr[j] <= pivot:
            
            # increment index of smaller element
            i = i+1
            print('cond.bf arr[i], arr[j] | i & j = %d & %d | %d & %d \t\t' % (arr[i],arr[j], i, j), end='')
            arr[i],arr[j] = arr[j],arr[i]
            print('cond.aft arr[i], arr[j] | i & j = %d & %d | %d & %d \t\t' % (arr[i],arr[j], i, j), end='')
            print()

    print('++++++++++++++++++++++++++++++++++++++++++++')
    print('before arr[i+1], before arr[high] = %d & %d \t\t' % (arr[i+1],arr[high]), end='')            
    arr[i+1],arr[high] = arr[high],arr[i+1]
    print('after arr[i+1], after arr[high] = %d & %d \t\t' % (arr[i+1],arr[high]), end='')
    return ( i+1 ) 

# The main function that implements QuickSort 
# arr[] --> Array to be sorted, 
# low --> Starting index, 
# high --> Ending index 

# Function to do Quick sort 
def quickSort(arr,low,high):
    print()
    print('kondisi array terkini : ' , arr)
    print('low & high : %d & %d \t' % (low,high), end='')
    print()
    if low < high:
    
        # pi is partitioning index, arr[p] is now
        # at right place
        pi = partition(arr,low,high)
        print('Begin value of element : ', arr, low, high, pi)
        print()
        # Separately sort elements before
        # partition and after partition
        quickSort(arr, low, pi-1)
        print('End value of element : ', arr, low, pi-1)
        quickSort(arr, pi+1, high)
        print('Get value of element : ', arr, pi+1, high)

# Driver code to test above 
arr = [56,8,88,1,4,3,17,20,3,87]
print('Initiate array : ', arr)
n = len(arr)
quickSort(arr,0,n-1) 

print ("Sorted array is:")
for i in range(n):
    print ("%d " % arr[i], end='')

print()
print()
end = time.time()
print('Speed time : ',(end-start))

# This code is contributed by Mohit Kumra - modified by HarysMatta

Eventually, the rest of the code is already on my GitHub account. Check it out (in the logic directory): https://github.com/Haryjava/python.git

Sense of firewall with pfSense

Here are some captures from my little task on how to use pfSense's HAProxy package, which is adapted from the native HAProxy service itself. The main purpose is to avoid the usual complicated configuration, with a pretty user interface (UI) and a comfortable user experience (UX). pfSense is developed and maintained by Netgate; if we have installed the latest version, it has a bunch of features we can use for our security needs, such as VPN, captive portal, DNS, DHCP, Snort, Zabbix, even SSL configuration.
Following these steps will guide you to an SSL-terminated nginx setup without setting the key, chain, ciphers, and so on in the nginx sites-enabled or sites-available configuration.

  • Go to System – Cert. Manager – Certificates to add or sign the SSL cert/chain and the .key we have been using (see https://tifosilinux.wordpress.com/2019/02/25/haproxy-gnu-linux/)
  • Choose "Import an existing certificate", then copy and paste your chain cert and key into the available fields (with these steps, we don't need ssl_certificate or ssl_certificate_key in the nginx conf; it is enough to open port 80 instead of 443, and we do not need "ssl on")
  • Go to Firewall – Rules – WAN to define your destination and port
  • Go to Firewall – NAT – Port Forward, where we redirect the domain/subdomain "with SSL" to a specific machine: the same public IP but a different port, forwarded to a private IP.
  • Go to Services – HAProxy – Add Frontend (defined by the public IP with port 443 in the address field, e.g. 123.456.789.012:443)
  • Go to Services – HAProxy – Add Backend (defined by the private IP with port 80 in the address field of the server list, e.g. 10.10.2.26:80)

Here we are

In the end, here is the sample nginx configuration without SSL enabled:

server {
    listen 80;
    index index.php index.html index.htm;

        set $root_path '/var/www/html/';
        root $root_path;

    server_name 10.10.2.26;

        location / {
                 set $root_path "$root_path/cms-merchant-biller";
                 try_files $uri $uri/ @up_op_rewrite;
         }

        location @up_op_rewrite {
                 rewrite ^/report-ppob/(.*)$ /index.php?_url=/$1;
         }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.2-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }


}