The resolvconf package on Ubuntu helped me find a way to keep our nameserver entries in /etc/resolv.conf. The file kept getting overwritten back to the default nameservers (apparently by NetworkManager ☹️).
Yes, the resolvconf package can be the powerful solution here.
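As a minimal sketch (the 8.8.8.8 address is just an example), the usual resolvconf workflow is to put your permanent nameserver in the "head" file, which resolvconf prepends to the generated /etc/resolv.conf:

```
root@ubuntu:/home/tifosilinux# apt-get install resolvconf
root@ubuntu:/home/tifosilinux# echo "nameserver 8.8.8.8" >> /etc/resolvconf/resolv.conf.d/head
root@ubuntu:/home/tifosilinux# resolvconf -u
```

The -u flag regenerates /etc/resolv.conf, so the entry survives the overwrites described above.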
In this section we will also describe phpLDAPadmin, a web-based LDAP client, in order to simplify LDAP configuration. Honestly, this is a 'deprecated' article from my archive, but it should still fulfill your needs. The content itself consists of the two following parts: 1) installation and configuration of LDAP and phpLDAPadmin, and 2) integration of LDAP into the Thunderbird address book for e-mail purposes. OK, here we go. Ubuntu 16.04 (Xenial) will be our operating system for this; it has a bunch of packages we can install, and here are the steps to install LDAP:
1). Installation and configuration LDAP and phpLDAPadmin
You will be prompted for the administrator password, DNS domain name, organization name, and database backend (BDB, HDB, or MDB can be used). For a quick deployment, install phpldapadmin and use .ldif files (instead of CSV, DSML, or vCard) as configuration files, like these:
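On Ubuntu 16.04 the packages can be installed like this; dpkg-reconfigure re-runs the wizard that asks the questions above:

```
root@ubuntu:/home/tifosilinux# apt-get update
root@ubuntu:/home/tifosilinux# apt-get install slapd ldap-utils phpldapadmin
root@ubuntu:/home/tifosilinux# dpkg-reconfigure slapd
```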
root@ubuntu:/home/tifosilinux# ldapsearch -x
# extended LDIF
# base <dc=hary,dc=tifosilinux,dc=com> (default) with scope subtree
# filter: (objectclass=*)
# requesting: ALL
# admin, hary.tifosilinux.com
description: LDAP administrator
# users, hary.tifosilinux.com
# groups, hary.tifosilinux.com
# admin, users, hary.tifosilinux.com
root@ubuntu:/home/tifosilinux# cat self.ldif
# objectClass and sn are required for a valid inetOrgPerson entry;
# adjust them to your own schema
dn: cn=admin,ou=users,dc=hary,dc=tifosilinux,dc=com
objectClass: inetOrgPerson
cn: admin
cn: Libria Puji
sn: Puji
postOfficeBox: PO Box 17135
l: Jakarta Forest
telephoneNumber: (02) 9451 1144
facsimileTelephoneNumber: (02) 9451 1122
mobile: 0408 239 711
# Edit configuration phpldapadmin
root@ubuntu:/home/tifosilinux# vim /etc/phpldapadmin/config.php
/* Hide the warnings for invalid objectClasses/attributes in templates. */
$config->custom->appearance['hide_template_warning'] = true;

/* Your LDAP server's hostname, or an ldapi:// URI
   (Unix socket at /usr/local/var/run/ldap) */
$servers->setValue('server','host','127.0.0.1');

/* The port your LDAP server listens on (no quotes). 389 is standard. */
$servers->setValue('server','port',389);

/* Array of base DNs of your LDAP server. Leave this blank to have phpLDAPadmin
   auto-detect it for you. */
$servers->setValue('server','base',array('dc=hary,dc=tifosilinux,dc=com'));
root@ubuntu:/home/tifosilinux# ldapadd -x -D 'cn=admin,dc=hary,dc=tifosilinux,dc=com' -W -f <filename>.ldif
For our table reference on the LDAP attributes used in address book entries, we are using this information:
CN = Common Name
OU = Organizational Unit
DC = Domain Component
Next we should determine our objective, such as whether we want to add a Generic: User Account, Generic: Posix Group, Generic: Organisational Unit, or something else.
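As a hedged illustration of the last two options, an organisational unit and a POSIX group can be created with LDIF entries like these (the names and gidNumber are examples, not from my directory):

```
dn: ou=groups,dc=hary,dc=tifosilinux,dc=com
objectClass: organizationalUnit
ou: groups

dn: cn=developers,ou=groups,dc=hary,dc=tifosilinux,dc=com
objectClass: posixGroup
cn: developers
gidNumber: 10000
```

Load them with the same ldapadd command shown above.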
2). Integration LDAP into Thunderbird Addressbook for e-mail purposes.
Eventually, porting all of this into another technology in our corporate environment can be quite challenging and tough.
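Under the hood, the Thunderbird address book simply runs LDAP searches against the directory; a query similar to what it sends can be simulated from the shell (the base DN and filter here are assumptions):

```
root@ubuntu:/home/tifosilinux# ldapsearch -x -H ldap://localhost \
    -b "ou=users,dc=hary,dc=tifosilinux,dc=com" "(mail=*)" cn mail telephoneNumber
```

In Thunderbird itself, the same details (hostname, base DN, port 389) go into the dialog for creating a new LDAP directory in the Address Book.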
This is my one and only article about Elasticsearch. We know how powerful the ELK stack is at digging through and cultivating millions of rows to get insight, then visualizing it in Kibana. So I presume we already know how to install, configure, and monitor all the running processes. First, we have to make sure the Elasticsearch service is listening on port 9200:
curl -XGET "http://localhost:9200/_cat/indices"
Secondly, I am not sure whether my /etc/filebeat/filebeat.yml was set up correctly according to the concept and whether it fulfills your needs; note that besides defining filebeat.inputs in filebeat.yml, I also used a matching Beats input in logstash.conf.
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all
# the supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml
# sample configuration file.

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  # (the nginx path below is my setup; adjust it to your own logs)
  paths:
    - /var/log/nginx/*.log

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#----------------------------- Logstash output --------------------------------

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
You can see from the setup and output sections that the Kibana service listens on localhost:5601 and that events are redirected to output.logstash. Then run it with systemctl start filebeat.
Next, for my /etc/logstash/logstash.conf configuration, I used the Logstash pipeline configuration language (which is not YAML) to parse nginx logs and CSV logs with a pipe delimiter.
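A minimal sketch of such a pipeline is below; the grok pattern and the CSV column names are assumptions for illustration, not my exact production config:

```
input {
  # Receive events from Filebeat on the standard Beats port
  beats {
    port => 5044
  }
}

filter {
  if [source] =~ "nginx" {
    # Parse nginx access logs with the stock combined-log pattern
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  } else {
    # Parse pipe-delimited rows; column names are examples
    csv {
      separator => "|"
      columns => ["log_time", "user", "action"]
    }
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```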
One more thing: if you are ready to process and get insight from more than a hundred million log lines, increase the heap memory size, and set the elasticsearch.requestTimeout and elasticsearch.shardTimeout values to "0" in the kibana.yml configuration.
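As a sketch, those knobs live in two files; the 4g heap size is only an example and should be sized to your host's RAM:

```
# /etc/elasticsearch/jvm.options -- give the JVM a bigger heap (example size)
-Xms4g
-Xmx4g

# /etc/kibana/kibana.yml -- "0" means no timeout
elasticsearch.requestTimeout: 0
elasticsearch.shardTimeout: 0
```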
In order to run multiple instances, we need different directories to store each instance's data and node UUID. Then execute the parameters below in every session or screen, one per configuration.
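A hedged sketch of starting a second instance with its own data and log directories; the paths and port are examples:

```
root@ubuntu:/home/tifosilinux# ./bin/elasticsearch -Epath.data=/var/lib/elasticsearch2 \
    -Epath.logs=/var/log/elasticsearch2 -Ehttp.port=9201
```

The -E flag overrides any setting from elasticsearch.yml on the command line, so the two instances never share a data directory.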
Last but not least: the most important things, in order to avoid duplicated data and protect against data loss during abnormal termination, are to remove the start_position => "beginning" line from logstash.conf and to enable persistent queues (queue.type: persisted in logstash.yml). Then we can bulk-load the data, including automatically setting the index name and the _doc type; and for a field adjustment such as converting a String type to a date, we can run a request in Dev Tools – Console:
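As a minimal sketch of such a conversion (assuming the ingest-pipeline approach; the field name and date format are examples):

```
PUT _ingest/pipeline/string_to_date
{
  "description": "parse a string field into a proper date",
  "processors": [
    {
      "date": {
        "field": "log_time",
        "formats": ["dd/MMM/yyyy:HH:mm:ss Z"],
        "target_field": "@timestamp"
      }
    }
  ]
}
```

Index documents through this pipeline (e.g. with ?pipeline=string_to_date on the index request) and the string ends up as a real date field.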
There are many ways to sort data besides the sort command in Linux/Unix (although in other cases the sort command can save you time). For example, the options -n and -r sort a file with numeric data in reverse order (~# sort -nr file_with_numeric.txt); you can even use -k to choose the column to sort on (~# sort -k 2n file_with_two_column.txt) and -M to sort by month name. But that is outside the topic of this post: Python will be our programming language for understanding which method is fastest for our data wrangling/munging. Bubble sort, insertion sort, selection sort, and quicksort will be our candidates for sorting data efficiently and correctly.
# Python program for implementation of Quicksort
import time

# This function takes last element as pivot, places
# the pivot element at its correct position in sorted
# array, and places all smaller (smaller than pivot)
# to left of pivot and all greater elements to right
# of pivot
def partition(arr, low, high):
    i = low - 1        # index of smaller element
    pivot = arr[high]  # pivot
    for j in range(low, high):
        # If current element is smaller than or equal to pivot
        print('arr[j] & pivot | j = %d & %d | %d \t\t' % (arr[j], pivot, j), end='')
        if arr[j] <= pivot:
            # increment index of smaller element
            i = i + 1
            print('cond.bf arr[i], arr[j] | i & j = %d & %d | %d & %d \t\t' % (arr[i], arr[j], i, j), end='')
            arr[i], arr[j] = arr[j], arr[i]
            print('cond.aft arr[i], arr[j] | i & j = %d & %d | %d & %d \t\t' % (arr[i], arr[j], i, j), end='')
    print('before arr[i+1], before arr[high] = %d & %d \t\t' % (arr[i+1], arr[high]), end='')
    arr[i+1], arr[high] = arr[high], arr[i+1]
    print('after arr[i+1], after arr[high] = %d & %d \t\t' % (arr[i+1], arr[high]), end='')
    return i + 1

# The main function that implements QuickSort
# arr --> Array to be sorted,
# low --> Starting index,
# high --> Ending index
def quickSort(arr, low, high):
    print('current array state : ', arr)
    print('low & high : %d & %d \t' % (low, high), end='')
    if low < high:
        # pi is partitioning index, arr[pi] is now at right place
        pi = partition(arr, low, high)
        print('Begin value of element : ', arr, low, high, pi)
        # Separately sort elements before partition and after partition
        quickSort(arr, low, pi - 1)
        print('End value of element : ', arr, low, pi - 1)
        quickSort(arr, pi + 1, high)
        print('Get value of element : ', arr, pi + 1, high)

# Driver code to test above
start = time.time()
arr = [56, 8, 88, 1, 4, 3, 17, 20, 3, 87]
print('Initiate array : ', arr)
n = len(arr)
quickSort(arr, 0, n - 1)
print("Sorted array is:")
for i in range(n):
    print("%d " % arr[i], end='')
print()
end = time.time()
print('Speed time : ', (end - start))
# This code is contributed by Mohit Kumra - modified by HarysMatta
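To compare these hand-written algorithms against Python's built-in Timsort, here is a minimal sketch; bubble_sort is my own illustration written for this comparison, not code from the listing above:

```python
import random
import time

def bubble_sort(arr):
    # Repeatedly swap adjacent out-of-order pairs until no swaps remain
    n = len(arr)
    for i in range(n):
        for j in range(0, n - i - 1):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
    return arr

data = [random.randint(0, 10000) for _ in range(2000)]

t0 = time.time()
bubble_result = bubble_sort(data[:])   # sort a copy, leave data intact
t1 = time.time()
builtin_result = sorted(data)
t2 = time.time()

print('bubble sort     : %.4f s' % (t1 - t0))
print('built-in sorted : %.4f s' % (t2 - t1))
```

The built-in sort should win by a wide margin on any non-trivial input, so treat the hand-written versions as instructive rather than as production tools.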
Herewith some captures from my little task on pfSense's HAProxy package, which is adapted from the native HAProxy service itself. The main purpose is to avoid the usual complicated configuration, with a pretty user interface (UI) and a comfortable user experience (UX). pfSense is developed and maintained by Netgate, and if we have the latest version installed, it has a bunch of features we can use for our security and other needs, such as VPN, Captive Portal, DNS, DHCP, Snort, Zabbix, and even SSL configuration. Following these steps will let you accomplish an SSL setup for Nginx without setting the key, chain, ciphers, and so on in the Nginx sites-enabled or sites-available configuration.
Choose 'Import an existing certificate', then copy and paste your certificate chain and key into the available fields. (With these steps we do not need ssl_certificate or ssl_certificate_key in the nginx conf; it is enough to keep port 80 open instead of 443, and we do not need to enable SSL on the web server at all.)
Go to Firewall – Rules – WAN to define your destination and port
Go to Firewall – NAT – Port Forward, where we redirect a domain/subdomain 'with SSL' to the specific machine: it shares the same public IP but uses a different port, forwarding to a private IP.
Go to Services – HAProxy – Add Frontend (defined by the public IP with port 443 in the address field, e.g. 123.456.789.012:443)
Go to Services – HAProxy – Add Backend (defined by the private IP with port 80 in the address field of the server list, e.g. 10.10.2.26:80)
Here we are
Finally, here is a sample nginx configuration without SSL enabled:
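As a sketch (the server_name and proxy target are examples, not my actual hosts), a plain HTTP server block sitting behind the pfSense HAProxy frontend could look like this:

```
server {
    listen 80;
    server_name hary.tifosilinux.com;

    location / {
        # HAProxy on pfSense terminates SSL, so nginx only speaks plain HTTP
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8080;
    }
}
```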