Programs & Examples On #Miter

I am receiving warning in Facebook Application using PHP SDK

You need to ensure that any code that modifies the HTTP headers is executed before the headers are sent. This includes statements like session_start(). The headers will be sent automatically when any HTML is output.

Your problem here is that you're sending the HTML output at the top of your page before you've executed any PHP at all.

Move the session_start() to the top of your document:

<?php
session_start();
?>
<html>
<head>
    <title>PHP SDK</title>
</head>
<body>
<?php
require_once 'src/facebook.php';
// more PHP code here.

How do I hide the PHP explode delimiter from submitted form results?

You could try a different approach: read the file line by line instead of dealing with all this nl2br / explode stuff.

$fh = fopen("employees.txt", "r");
if ($fh) {
    while (($line = fgets($fh)) !== false) {
        $line = trim($line);
        echo "<option value='".$line."'>".$line."</option>";
    }
} else {
    // error opening the file, do something
}

Also, maybe just doing a trim (removing whitespace from the beginning/end of the string) is all you need?

And maybe people are just misunderstanding what you mean by "submitting results to a spreadsheet": are you doing this with code, or copy/pasting from an HTML page into a spreadsheet? Maybe you can explain that in more detail. The delimiter you split the lines of the file on shouldn't be showing up in the output anyway, unless something else is producing unexpected output.

Provide schema while reading csv file as a dataframe

You can also do it like this using SparkSession and implicits:

import sparkSession.implicits._
val pagecount:DataFrame = sparkSession.read
.option("delimiter"," ")
.option("quote","")
.option("inferSchema","true")
.csv("dbfs:/databricks-datasets/wikipedia-datasets/data-001/pagecounts/sample/pagecounts-20151124-170000")
.toDF("project","article","requests","bytes_served")

Listing files in a specific "folder" of a AWS S3 bucket

S3 does not have directories. While you can list files in a pseudo-directory manner like you demonstrated, there is no directory "file" per se.
You may have inadvertently created a data file called users/<user-id>/contacts/<contact-id>/.

Key error when selecting columns in pandas dataframe after read_csv

The KeyError generally occurs when the key doesn't match any of the dataframe's column names exactly (stray whitespace in the header is a common culprit):

You could also try:

import pandas as pd

with open(filename, "r") as file:
    df = pd.read_csv(file, delimiter=",")
    # strip any leading/trailing whitespace from the column names
    df.columns = df.columns.str.strip()
    print(df.columns)
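
If the stray whitespace in the headers only follows the delimiter, a shorter route (a sketch, assuming a comma-separated file) is pandas' skipinitialspace option:

import pandas as pd

# skipinitialspace drops the space that follows each delimiter,
# so column names like " col" come back as "col"
df = pd.read_csv(filename, delimiter=",", skipinitialspace=True)
print(df.columns)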

Retrieving subfolders names in S3 bucket from boto3

I had the same issue but managed to resolve it using boto3.client and list_objects_v2 with Bucket and StartAfter parameters.

import boto3

s3client = boto3.client('s3')
bucket = 'my-bucket-name'
startAfter = 'firstlevelFolder/secondLevelFolder'

theobjects = s3client.list_objects_v2(Bucket=bucket, StartAfter=startAfter)
for obj in theobjects['Contents']:
    print(obj['Key'])

The output result for the code above would display the following:

firstlevelFolder/secondLevelFolder/item1
firstlevelFolder/secondLevelFolder/item2

Boto3 list_objects_v2 Documentation

In order to strip out only the directory name for secondLevelFolder I just used Python's split() method:

import boto3

s3client = boto3.client('s3')
bucket = 'my-bucket-name'
startAfter = 'firstlevelFolder/secondLevelFolder'

theobjects = s3client.list_objects_v2(Bucket=bucket, StartAfter=startAfter)
for obj in theobjects['Contents']:
    directoryName = obj['Key'].split('/')
    print(directoryName[1])

The output result for the code above would display the following:

secondLevelFolder
secondLevelFolder

Python split() Documentation

If you'd like to get the directory name AND contents item name then replace the print line with the following:

print "{}/{}".format(fileName[1], fileName[2])

And the following will be output:

secondLevelFolder/item1
secondLevelFolder/item2

Hope this helps
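
For reference, you can also ask S3 itself to group keys by the next "/" by passing the Delimiter parameter and reading CommonPrefixes. A minimal sketch, assuming the same bucket and prefix names as above:

import boto3

s3client = boto3.client('s3')

# Prefix + Delimiter make S3 return one CommonPrefixes entry per pseudo-directory
response = s3client.list_objects_v2(
    Bucket='my-bucket-name',
    Prefix='firstlevelFolder/',
    Delimiter='/'
)
for prefix in response.get('CommonPrefixes', []):
    print(prefix['Prefix'])  # e.g. firstlevelFolder/secondLevelFolder/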

How to change dataframe column names in pyspark?

df = df.withColumnRenamed("colName", "newColName")\
       .withColumnRenamed("colName2", "newColName2")

Advantage of this approach: with a long list of columns you often want to change only a few column names, and chaining withColumnRenamed is very convenient in that scenario. It is also very useful when joining tables with duplicate column names.
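
If you have several renames to apply, a small sketch (the mapping below is hypothetical) that chains withColumnRenamed from a dict:

from functools import reduce

# hypothetical old-name -> new-name mapping
mapping = {"colName": "newColName", "colName2": "newColName2"}

# apply withColumnRenamed once per entry, feeding each result into the next call
df = reduce(lambda d, kv: d.withColumnRenamed(kv[0], kv[1]), mapping.items(), df)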

There is no argument given that corresponds to the required formal parameter - .NET Error

I received this same error in the following Linq statement regarding DailyReport. The problem was that DailyReport had no default constructor. Apparently, it instantiates the object before populating the properties.

var sums = reports
    .GroupBy(r => r.CountryRegion)
    .Select(cr => new DailyReport
    {
        CountryRegion = cr.Key,
        ProvinceState = "All",
        RecordDate = cr.First().RecordDate,
        Confirmed = cr.Sum(c => c.Confirmed),
        Recovered = cr.Sum(c => c.Recovered),
        Deaths = cr.Sum(c => c.Deaths)
    });

TypeError: list indices must be integers or slices, not str

First, array_length should be an integer and not a string:

array_length = len(array_dates)

Second, your for loop should be constructed using range:

for i in range(array_length):  # Use `xrange` for python 2.

Third, i will increment automatically, so delete the following line:

i += 1

Note, one could also just zip the two lists given that they have the same length:

import csv

dates = ['2020-01-01', '2020-01-02', '2020-01-03']
urls = ['www.abc.com', 'www.cnn.com', 'www.nbc.com']

csv_file_patch = '/path/to/filename.csv'

with open(csv_file_patch, 'w') as fout:
    csv_file = csv.writer(fout, delimiter=';', lineterminator='\n')
    result_array = zip(dates, urls)
    csv_file.writerows(result_array)

#1227 - Access denied; you need (at least one of) the SUPER privilege(s) for this operation

Remove DEFINER=root@localhost from all definitions in the dump, including procedures.

How do I use a delimiter with Scanner.useDelimiter in Java?

With Scanner the default delimiters are the whitespace characters.

But Scanner can define where a token starts and ends based on a set of delimiters, which can be specified in two ways:

  1. Using the Scanner method useDelimiter(String pattern)
  2. Using the Scanner method useDelimiter(Pattern pattern), where Pattern is a regular expression that specifies the delimiter set.

So the useDelimiter() methods are used to tokenize the Scanner input, much like the StringTokenizer class does.

And here is an Example:

public static void main(String[] args) {

    // Initialize Scanner object
    Scanner scan = new Scanner("Anna Mills/Female/18");
    // initialize the string delimiter
    scan.useDelimiter("/");
    // Printing the tokenized Strings
    while(scan.hasNext()){
        System.out.println(scan.next());
    }
    // closing the scanner stream
    scan.close();
}

Prints this output:

Anna Mills
Female
18

Split function in oracle to comma separated values with automatic sequence

Here is how you could create such a table:

 SELECT LEVEL AS id, REGEXP_SUBSTR('A,B,C,D', '[^,]+', 1, LEVEL) AS data
   FROM dual
CONNECT BY REGEXP_SUBSTR('A,B,C,D', '[^,]+', 1, LEVEL) IS NOT NULL;

With a little bit of tweaking (i.e., replacing the , in [^,] with a variable) you could write such a function to return a table.

How can I solve ORA-00911: invalid character error?

I encountered the same thing lately. It was just due to spaces introduced when copying a script from a document into SQL Developer. I had to remove the spaces and the script ran.

Split String by delimiter position using oracle SQL

You want to use regexp_substr() for this. This should work for your example:

select regexp_substr(val, '[^/]+/[^/]+', 1, 1) as part1,
       regexp_substr(val, '[^/]+$', 1, 1) as part2
from (select 'F/P/O' as val from dual) t

Here, by the way, is the SQL Fiddle.

Oops. I missed the part of the question where it says the last delimiter. For that, we can use regexp_replace() for the first part:

select regexp_replace(val, '/[^/]+$', '', 1, 1) as part1,
       regexp_substr(val, '[^/]+$', 1, 1) as part2
from (select 'F/P/O' as val from dual) t

And here is this corresponding SQL Fiddle.

Maven:Failed to execute goal org.apache.maven.plugins:maven-resources-plugin:2.7:resources

With:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-resources-plugin</artifactId>
            <version>2.7</version>
        </plugin>

Was getting the following exception:

...
Caused by: org.apache.maven.plugin.MojoExecutionException: Mark invalid
    at org.apache.maven.plugin.resources.ResourcesMojo.execute(ResourcesMojo.java:306)
    at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:132)
    at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
    ... 25 more
Caused by: org.apache.maven.shared.filtering.MavenFilteringException: Mark invalid
    at org.apache.maven.shared.filtering.DefaultMavenFileFilter.copyFile(DefaultMavenFileFilter.java:129)
    at org.apache.maven.shared.filtering.DefaultMavenResourcesFiltering.filterResources(DefaultMavenResourcesFiltering.java:264)
    at org.apache.maven.plugin.resources.ResourcesMojo.execute(ResourcesMojo.java:300)
    ... 27 more
Caused by: java.io.IOException: Mark invalid
    at java.io.BufferedReader.reset(BufferedReader.java:505)
    at org.apache.maven.shared.filtering.MultiDelimiterInterpolatorFilterReaderLineEnding.read(MultiDelimiterInterpolatorFilterReaderLineEnding.java:416)
    at org.apache.maven.shared.filtering.MultiDelimiterInterpolatorFilterReaderLineEnding.read(MultiDelimiterInterpolatorFilterReaderLineEnding.java:205)
    at java.io.Reader.read(Reader.java:140)
    at org.apache.maven.shared.utils.io.IOUtil.copy(IOUtil.java:181)
    at org.apache.maven.shared.utils.io.IOUtil.copy(IOUtil.java:168)
    at org.apache.maven.shared.utils.io.FileUtils.copyFile(FileUtils.java:1856)
    at org.apache.maven.shared.utils.io.FileUtils.copyFile(FileUtils.java:1804)
    at org.apache.maven.shared.filtering.DefaultMavenFileFilter.copyFile(DefaultMavenFileFilter.java:114)
    ... 29 more



The error went away after adding maven-filtering 1.3 as a dependency of the plugin:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-resources-plugin</artifactId>
            <version>2.7</version>
            <dependencies>
                <dependency>
                    <groupId>org.apache.maven.shared</groupId>
                    <artifactId>maven-filtering</artifactId>
                    <version>1.3</version>
                </dependency>
            </dependencies>
        </plugin>

Adding double quote delimiters into csv file

In Excel for Mac at least, you can do this by saving as "CSV for MS DOS" which adds double quotes for any field which needs them.

MySQL: ALTER TABLE if column not exists

SET @dbname = DATABASE();
SET @tablename = "table";
SET @columnname = "fieldname";
SET @preparedStatement = (SELECT IF(
  (
    SELECT COUNT(*) FROM INFORMATION_SCHEMA.COLUMNS
    WHERE
      (table_name = @tablename)
      AND (table_schema = @dbname)
      AND (column_name = @columnname)
  ) > 0,
  "SELECT 1",
  CONCAT("ALTER TABLE ", @tablename, " ADD ", @columnname, " DECIMAL(18,4) NULL;")
));
PREPARE alterIfNotExists FROM @preparedStatement;
EXECUTE alterIfNotExists;
DEALLOCATE PREPARE alterIfNotExists; 

Split string with string as delimiter

I've found two older scripts that split on an arbitrary, or even a specific, string. As a starting point they are always helpful.

https://www.administrator.de/contentid/226533#comment-1059704 https://www.administrator.de/contentid/267522#comment-1000886

@echo off
:noOption
if "%~1" neq "" goto :nohelp
echo Gibt eine Ausgabe bis zur angebenen Zeichenfolge&echo(
echo %~n0 ist mit Eingabeumleitung zu nutzen
echo %~n0 "Zeichenfolge" ^<Quelldatei [^>Zieldatei]&echo(
echo    Zeichenfolge    die zu suchende Zeichenfolge wird mit FIND bestimmt
echo            ohne AusgabeUmleitung Ausgabe im CMD Fenster
exit /b
:nohelp
setlocal disabledelayedexpansion
set "intemp=%temp%%time::=%"
set "string=%~1"
set "stringlength=0"
:Laenge string bestimmen
for /f eol^=^

^ delims^= %%i in (' cmd /u /von /c "echo(!string!"^|find /v "" ') do set /a Stringlength += 1

:Eingabe temporär speichern
>"%intemp%" find /n /v ""

:suchen der Zeichenfolge und Zeile bestimmen und speichen
set "NRout="
for /f "tokens=*delims=" %%a in (' find "%string%"^<"%intemp%" ') do if not defined NRout (set "LineStr=%%a"
  for /f "delims=[]" %%b in ("%%a") do set "NRout=%%b"
)
if not defined NRout >&2 echo Zeichenfolge nicht gefunden.& set /a xcode=1 &goto :end
if %NRout% gtr 1 call :Line
call :LineStr

:end
del "%intemp%"
exit /b %xcode%

:LineStr Suche nur jeden ersten Buchstaben des Strings in der Treffer-Zeile dann Ausgabe bis dahin
for /f eol^=^

^ delims^= %%a in ('cmd /u /von /c "echo(!String!"^|findstr .') do (
  for /f "delims=[]" %%i in (' cmd /u /von /c "echo(!LineStr!"^|find /n "%%a" ') do (
    setlocal enabledelayedexpansion
    for /f %%n in ('set /a %%i-1') do if !LineStr:^~%%n^,%stringlength%! equ !string! (
      set "Lineout=!LineStr:~0,%%n!!string!"
      echo(!Lineout:*]=!
      exit /b
    )
) )
exit /b 

:Line vorige Zeilen ausgeben
for /f "usebackq tokens=* delims=" %%i in ("%intemp%") do (
  for /f "tokens=1*delims=[]" %%n in ("%%i") do if %%n EQU %NRout%  exit /b
  set "Line=%%i"
  setlocal enabledelayedexpansion 
  echo(!Line:*]=!
  endlocal
)
exit /b

@echo off
:: CUTwithWildcards.cmd
:noOption
if "%~1" neq "" goto :nohelp
echo Gibt eine Ausgabe ohne die angebene Zeichenfolge.
echo Der Rest wird abgeschnitten.&echo(
echo %~n0 "Zeichenfolge" B n E [/i] &echo(
echo    Zeichenfolge    String zum Durchsuchen
echo    B   Zeichen Wonach am Anfang gesucht wird
echo    n   Auszulassende Zeichenanzahl
echo    E   Zeichen was das Ende der Zeichen Bestimmt
echo    /i  Case intensive
exit /b
:nohelp
setlocal disabledelayedexpansion
set  "Original=%~1"
set     "Begin=%~2"
set /a    Excl=%~3 ||echo Syntaxfehler.>&2 &&exit /b 1
set       "End=%~4"
if not defined end echo Syntaxfehler.>&2 &exit /b 1
set   "CaseInt=%~5"
:: end Setting Input Param
set       "out="
set      "more="
call :read Original
if errorlevel 1 echo Zeichenfolge nicht gefunden.>&2
exit /b
:read VarName B # E [/i]
for /f "delims=[]" %%a in (' cmd /u /von /c "echo  !%~1!"^|find /n %CaseInt% "%Begin%" ') do (
  if defined out exit /b 0
  for /f "delims=[]" %%b in (' cmd /u /von /c "echo !%1!"^|more +%Excl%^|find /n %CaseInt% "%End%"^|find "[%%a]" ') do (
    set "out=1"
    setlocal enabledelayedexpansion
    set "In=  !Original!"
    set "In=!In:~,%%a!"
    echo !In:^~2!
    endlocal
) )
if not defined out exit /b 1 
exit /b

::oneliner for CMDLine
set "Dq=""
for %i in ("*S??E*") do @set "out=1" &for /f "delims=[]" %a in ('cmd/u/c "echo  %i"^|find /n "S"') do @if defined out for /f "delims=[]" %b in ('cmd/u/c "echo %i"^|more +2^|find /n "E"^|find "[%a]"') do @if %a equ %b set "out=" & set in= "%i" &cmd /v/c echo ren "%i" !in:^~0^,%a!!Dq!)

Create hive table using "as select" or "like" and also specify delimiter

Create Table as select (CTAS) is possible in Hive.

You can try out below command:

CREATE TABLE new_test 
    row format delimited 
    fields terminated by '|' 
    STORED AS RCFile 
AS select * from source where col=1

Notes:

  1. The target cannot be a partitioned table.
  2. The target cannot be an external table.
  3. It copies the structure as well as the data.

Create table like is also possible in Hive.

  1. It just copies the source table definition.

Split string using a newline delimiter with Python

data = """a,b,c
d,e,f
g,h,i
j,k,l"""

print(data.split())       # ['a,b,c', 'd,e,f', 'g,h,i', 'j,k,l']

str.split, by default, splits on any whitespace. If the actual string contains other whitespace characters (for example spaces within a line), you might want to split on newlines only:

print(data.split("\n"))   # ['a,b,c', 'd,e,f', 'g,h,i', 'j,k,l']

Or as @Ashwini Chaudhary suggested in the comments, you can use

print(data.splitlines())

Sequelize, convert entity to plain object

Here's what I'm using to get a plain response object, with non-stringified values and all nested associations, from a Sequelize v4 query.

With plain JavaScript (ES2015+):

const toPlain = response => {
  const flattenDataValues = ({ dataValues }) => {
    const flattenedObject = {};

    Object.keys(dataValues).forEach(key => {
      const dataValue = dataValues[key];

      if (
        Array.isArray(dataValue) &&
        dataValue[0] &&
        dataValue[0].dataValues &&
        typeof dataValue[0].dataValues === 'object'
      ) {
        flattenedObject[key] = dataValues[key].map(flattenDataValues);
      } else if (dataValue && dataValue.dataValues && typeof dataValue.dataValues === 'object') {
        flattenedObject[key] = flattenDataValues(dataValues[key]);
      } else {
        flattenedObject[key] = dataValues[key];
      }
    });

    return flattenedObject;
  };

  return Array.isArray(response) ? response.map(flattenDataValues) : flattenDataValues(response);
};

With lodash (a bit more concise):

const toPlain = response => {
  const flattenDataValues = ({ dataValues }) =>
    _.mapValues(dataValues, value => (
      _.isArray(value) && _.isObject(value[0]) && _.isObject(value[0].dataValues)
        ? _.map(value, flattenDataValues)
        : _.isObject(value) && _.isObject(value.dataValues)
          ? flattenDataValues(value)
          : value
    ));

  return _.isArray(response) ? _.map(response, flattenDataValues) : flattenDataValues(response);
};

Usage:

const res = await User.findAll({
  include: [{
    model: Company,
    as: 'companies',
    include: [{
      model: Member,
      as: 'member',
    }],
  }],
});

const plain = toPlain(res);

// 'plain' now contains simple db object without any getters/setters with following structure:
// [{
//   id: 123,
//   name: 'John',
//   companies: [{
//     id: 234,
//     name: 'Google',
//     members: [{
//       id: 345,
//       name: 'Paul',
//     }]
//   }]
// }]

T-SQL split string based on delimiter

Maybe this will help you.

SELECT SUBSTRING(myColumn, 1, CASE CHARINDEX('/', myColumn)
            WHEN 0
                THEN LEN(myColumn)
            ELSE CHARINDEX('/', myColumn) - 1
            END) AS FirstName
    ,SUBSTRING(myColumn, CASE CHARINDEX('/', myColumn)
            WHEN 0
                THEN LEN(myColumn) + 1
            ELSE CHARINDEX('/', myColumn) + 1
            END, 1000) AS LastName
FROM MyTable

Warning: session_start(): Cannot send session cookie - headers already sent by (output started at

  1. session_start() must be at the top of your source, with no HTML or other output before it.
  2. You can only call session_start() once.
  3. You can guard against calling it twice this way: if (session_status() != PHP_SESSION_ACTIVE) session_start();

Extracting columns from text file with different delimiters in Linux

If the command should work with both tabs and spaces as the delimiter I would use awk:

awk '{print $100,$101,$102,$103,$104,$105}' myfile > outfile

As long as you just need a handful of fields it is fine to just type them; for longer ranges you can use a for loop:

awk '{for(i=100;i<=105;i++)print $i}' myfile > outfile

If you want to use cut, you need to use the -f option:

cut -f100-105 myfile > outfile

If the field delimiter is different from TAB you need to specify it using -d:

cut -d' ' -f100-105 myfile > outfile

Check the man page for more info on the cut command.

How to read file with space separated values in pandas

If you can't get text parsing to work using the accepted answer (e.g. if your text file contains non-uniform rows), then it's worth trying Python's csv library. Here's an example using a user-defined Dialect:

 import csv

 csv.register_dialect('skip_space', skipinitialspace=True)
 with open(my_file, 'r') as f:
      reader=csv.reader(f , delimiter=' ', dialect='skip_space')
      for item in reader:
          print(item)

Extract csv file specific columns to list in Python

A standard-lib version (no pandas)

This assumes that the first row of the csv is the headers

import csv

# open the file in universal line ending mode 
with open('test.csv', 'rU') as infile:
  # read the file as a dictionary for each row ({header : value})
  reader = csv.DictReader(infile)
  data = {}
  for row in reader:
    for header, value in row.items():
      try:
        data[header].append(value)
      except KeyError:
        data[header] = [value]

# extract the variables you want
names = data['name']
latitude = data['latitude']
longitude = data['longitude']

Gaussian fit for Python

Actually, you do not need to do a first guess. Simply doing

import matplotlib.pyplot as plt  
from scipy.optimize import curve_fit
from scipy import asarray as ar,exp

x = ar(range(10))
y = ar([0,1,2,3,4,5,4,3,2,1])

n = len(x)                          #the number of data
mean = sum(x*y)/n                   #note this correction
sigma = sum(y*(x-mean)**2)/n        #note this correction

def gaus(x,a,x0,sigma):
    return a*exp(-(x-x0)**2/(2*sigma**2))

popt,pcov = curve_fit(gaus,x,y)
#popt,pcov = curve_fit(gaus,x,y,p0=[1,mean,sigma])

plt.plot(x,y,'b+:',label='data')
plt.plot(x,gaus(x,*popt),'ro:',label='fit')
plt.legend()
plt.title('Fig. 3 - Fit for Time Constant')
plt.xlabel('Time (s)')
plt.ylabel('Voltage (V)')
plt.show()

works fine. This is simpler because making a guess is not trivial. I had more complex data and did not manage to do a proper first guess, but simply removing the first guess worked fine :)

P.S.: SciPy emits a warning saying it is better to use numpy.exp().

Fastest way to flatten / un-flatten nested JSON objects

Here's my much shorter implementation:

Object.unflatten = function(data) {
    "use strict";
    if (Object(data) !== data || Array.isArray(data))
        return data;
    var regex = /\.?([^.\[\]]+)|\[(\d+)\]/g,
        resultholder = {};
    for (var p in data) {
        var cur = resultholder,
            prop = "",
            m;
        while (m = regex.exec(p)) {
            cur = cur[prop] || (cur[prop] = (m[2] ? [] : {}));
            prop = m[2] || m[1];
        }
        cur[prop] = data[p];
    }
    return resultholder[""] || resultholder;
};

flatten hasn't changed much (and I'm not sure whether you really need those isEmpty cases):

Object.flatten = function(data) {
    var result = {};
    function recurse (cur, prop) {
        if (Object(cur) !== cur) {
            result[prop] = cur;
        } else if (Array.isArray(cur)) {
             for(var i=0, l=cur.length; i<l; i++)
                 recurse(cur[i], prop + "[" + i + "]");
            if (l == 0)
                result[prop] = [];
        } else {
            var isEmpty = true;
            for (var p in cur) {
                isEmpty = false;
                recurse(cur[p], prop ? prop+"."+p : p);
            }
            if (isEmpty && prop)
                result[prop] = {};
        }
    }
    recurse(data, "");
    return result;
}

Together, they run your benchmark in about the half of the time (Opera 12.16: ~900ms instead of ~ 1900ms, Chrome 29: ~800ms instead of ~1600ms).

Note: This and most other solutions answered here focus on speed, are susceptible to prototype pollution, and should not be used on untrusted objects.

ValueError : I/O operation on closed file

The same error can be raised by mixing tabs and spaces for indentation.

with open('/foo', 'w') as f:
 (spaces OR  tab) print f       <-- success
 (spaces AND tab) print f       <-- fail

Using sed to split a string with a delimiter

To split a string with a delimiter with GNU sed you say:

sed 's/delimiter/\n/g'     # GNU sed

For example, to split using : as a delimiter:

$ sed 's/:/\n/g' <<< "he:llo:you"
he
llo
you

Or with a non-GNU sed:

$ sed $'s/:/\\\n/g' <<< "he:llo:you"
he
llo
you

In this particular case, you missed the g after the substitution. Hence, it is just done once. See:

$ echo "string1:string2:string3:string4:string5" | sed s/:/\\n/g
string1
string2
string3
string4
string5

g stands for global and means that the substitution has to be done globally, that is, for any occurrence. See that the default is 1 and if you put for example 2, it is done 2 times, etc.

All together, in your case you would need to use:

sed 's/:/\n/g' ~/Desktop/myfile.txt

Note that you can directly use the sed ... file syntax, instead of unnecessary piping: cat file | sed.

PHP CSV string to array

Try this, it's working for me:

$delimiter = ",";
  $enclosure = '"';
  $escape = "\\" ;
  $rows = array_filter(explode(PHP_EOL, $content));
  $header = NULL;
  $data = [];

  foreach($rows as $row)
  {
    $row = str_getcsv ($row, $delimiter, $enclosure , $escape);

    if(!$header) {
      $header = $row;
    } else {
      $data[] = array_combine($header, $row);
    }
  }

How to copy from CSV file to PostgreSQL table with headers in CSV file?

Alternative: from the terminal, without server-side file permissions

The PostgreSQL documentation, in the Notes section of COPY, says:

The path will be interpreted relative to the working directory of the server process (normally the cluster's data directory), not the client's working directory.

So, generally, using psql or any client, even on a local server, you run into problems... And if you're writing a COPY command for other users, e.g. in a GitHub README, the reader will have problems...

The only way to express a relative path with client-side permissions is to use STDIN:

When STDIN or STDOUT is specified, data is transmitted via the connection between the client and the server.

as shown here:

psql -h remotehost -d remote_mydb -U myuser -c \
   "copy mytable (column1, column2) from STDIN with delimiter as ','" \
   < ./relative_path/file.csv

Can pandas automatically recognize dates?

You should add parse_dates=True, or parse_dates=['column name'] when reading; that's usually enough to magically parse it. But there are always weird formats which need to be defined manually. In such a case you can also add a date parser function, which is the most flexible way possible.

Suppose you have a column 'datetime' with your string, then:

from datetime import datetime
dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S')

df = pd.read_csv(infile, parse_dates=['datetime'], date_parser=dateparse)

This way you can even combine multiple columns into a single datetime column; this merges a 'date' and a 'time' column into a single 'datetime' column:

dateparse = lambda x: datetime.strptime(x, '%Y-%m-%d %H:%M:%S')

df = pd.read_csv(infile, parse_dates={'datetime': ['date', 'time']}, date_parser=dateparse)

You can find directives (i.e. the letters to be used for different formats) for strptime and strftime in this page.

How to split data into trainset and testset randomly?

The following produces more general k-fold cross-validation splits. Your 50-50 partitioning would be achieved by making k=2 below; all you would have to do is pick one of the two partitions produced. Note: I haven't tested the code, but I'm pretty sure it should work.

import random, math

def k_fold(myfile, myseed=11109, k=3):
    # Load data
    data = open(myfile).readlines()

    # Shuffle input
    random.seed(myseed)
    random.shuffle(data)

    # Compute partition size given input k
    len_part=int(math.ceil(len(data)/float(k)))

    # Create one partition per fold
    train={}
    test={}
    for ii in range(k):
        test[ii]  = data[ii*len_part:ii*len_part+len_part]
        train[ii] = [jj for jj in data if jj not in test[ii]]

    return train, test      
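
Usage would look roughly like this (the file name is a placeholder); for the 50-50 split, call it with k=2 and keep either fold:

train, test = k_fold('my_data.txt', myseed=11109, k=2)  # placeholder file name
train_set = train[0]  # training partition of the first fold
test_set = test[0]    # held-out partition of the first fold
print(len(train_set), len(test_set))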

How to read one single line of csv data in Python?

Just for reference, a for loop can be used after getting the first row to get the rest of the file:

import csv

with open('file.csv', newline='') as f:
    reader = csv.reader(f)
    row1 = next(reader)  # gets the first line
    for row in reader:
        print(row)       # prints rows 2 and onward

Python readlines() usage and efficient practice for reading

Read line by line, not the whole file:

for line in open(file_name, 'rb'):
    # process line here

Even better use with for automatically closing the file:

with open(file_name, 'rb') as f:
    for line in f:
        # process line here

The above will read the file object using an iterator, one line at a time.

Using the grep and cut delimiter command (in bash shell scripting UNIX) - and kind of "reversing" it?

You don't need to change the delimiter to display the right part of the string with cut.

The -f switch of the cut command selects the n-th field separated by your delimiter (here :), so you can just type:

 grep puddle2_1557936 | cut -d ":" -f2

Other solutions (adapt them a bit) if you want some fun:

Using grep with -P (PCRE):

grep -oP 'puddle2_1557936:\K.*' <<< 'puddle2_1557936:/home/rogers.williams/folderz/puddle2'                                                                        
/home/rogers.williams/folderz/puddle2

or, still with grep, using a lookbehind:

grep -oP '(?<=puddle2_1557936:).*' <<< 'puddle2_1557936:/home/rogers.williams/folderz/puddle2'                                                                    
/home/rogers.williams/folderz/puddle2

or with perl:

perl -lne '/puddle2_1557936:(.*)/ and print $1' <<< 'puddle2_1557936:/home/rogers.williams/folderz/puddle2'                                                      
/home/rogers.williams/folderz/puddle2

or using ruby (thanks to glenn jackman):

ruby -F: -ane '/puddle2_1557936/ and puts $F[1]' <<< 'puddle2_1557936:/home/rogers.williams/folderz/puddle2'
/home/rogers.williams/folderz/puddle2

or with awk:

awk -F'puddle2_1557936:' '{print $2}'  <<< 'puddle2_1557936:/home/rogers.williams/folderz/puddle2'
/home/rogers.williams/folderz/puddle2

or with python:

python -c 'import sys; print(sys.argv[1].split("puddle2_1557936:")[1])' 'puddle2_1557936:/home/rogers.williams/folderz/puddle2'
/home/rogers.williams/folderz/puddle2

or using only bash:

IFS=: read _ a <<< "puddle2_1557936:/home/rogers.williams/folderz/puddle2"
echo "$a"
/home/rogers.williams/folderz/puddle2

or using javascript in a js shell:

js<<EOF
var x = 'puddle2_1557936:/home/rogers.williams/folderz/puddle2'
print(x.substr(x.indexOf(":")+1))
EOF
/home/rogers.williams/folderz/puddle2

or using php on the command line:

php -r 'preg_match("/puddle2_1557936:(.*)/", $argv[1], $m); echo "$m[1]\n";' 'puddle2_1557936:/home/rogers.williams/folderz/puddle2' 
/home/rogers.williams/folderz/puddle2

pandas: How do I split text in a column into multiple rows?

Differently from Dan, I consider his answer quite elegant... but unfortunately it is also very, very inefficient. So, since the question mentioned "a large csv file", let me suggest timing Dan's solution in a shell:

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print df['col'].apply(lambda x : pd.Series(x.split(' '))).head()"

... compared to this alternative:

time python -c "import pandas as pd;
from scipy import array, concatenate;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(concatenate(df['col'].apply( lambda x : [x.split(' ')]))).head()"

... and this:

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(dict(zip(range(3), [df['col'].apply(lambda x : x.split(' ')[i]) for i in range(3)]))).head()"

The second simply refrains from allocating 100 000 Series, and this is enough to make it around 10 times faster. But the third solution, which somewhat ironically wastes a lot of calls to str.split() (it is called once per column per row, so three times more often than in the other two solutions), is around 40 times faster than the first, because it even avoids instantiating the 100 000 lists. And yes, it is certainly a little ugly...

EDIT: this answer suggests how to use "to_list()" and to avoid the need for a lambda. The result is something like

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(df.col.str.split().tolist()).head()"

which is even more efficient than the third solution, and certainly much more elegant.

EDIT: the even simpler

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print pd.DataFrame(list(df.col.str.split())).head()"

works too, and is almost as efficient.

EDIT: even simpler! And handles NaNs (but less efficient):

time python -c "import pandas as pd;
df = pd.DataFrame(['a b c']*100000, columns=['col']);
print df.col.str.split(expand=True).head()"

How to use delimiter for csv in python

OK, here is what I understood from your question: you are writing a CSV file from Python, but when you open that file in another application like Excel or OpenOffice, it shows the complete row in one cell rather than each value in an individual cell. Am I right?

If I am, then please try this:

import csv

with open(r"C:\\test.csv", "wb") as csv_file:
    writer = csv.writer(csv_file, delimiter =",",quoting=csv.QUOTE_MINIMAL)
    writer.writerow(["a","b"])

you have to set the delimiter = ","

How to use python numpy.savetxt to write strings and float number to an ASCII file?

You have to specify the format (fmt) of your data in savetxt, in this case as a string (%s):

num.savetxt('test.txt', DAT, delimiter=" ", fmt="%s") 

The default format is a float, that is the reason it was expecting a float instead of a string and explains the error message.

Read specific columns from a csv file with csv module?

Context: For this type of work you should use the amazing python petl library. That will save you a lot of work and potential frustration from doing things 'manually' with the standard csv module. AFAIK, the only people who still use the csv module are those who have not yet discovered better tools for working with tabular data (pandas, petl, etc.), which is fine, but if you plan to work with a lot of data in your career from various strange sources, learning something like petl is one of the best investments you can make. To get started should only take 30 minutes after you've done pip install petl. The documentation is excellent.

Answer: Let's say you have the first table in a csv file (you can also load directly from the database using petl). Then you would simply load it and do the following.

from petl import fromcsv, look, cut, tocsv

# Load the table
table1 = fromcsv('table1.csv')
# Alter the columns
table2 = cut(table1, 'Song_Name', 'Artist_ID')
# Have a quick look to make sure things are ok. Prints a nicely formatted table to your console
print(look(table2))
# Save to new file
tocsv(table2, 'new.csv')

Splitting String with delimiter

dependencies {
   compile ('org.springframework.kafka:spring-kafka-test:2.2.7.RELEASE') { dep ->
     ['org.apache.kafka:kafka_2.11','org.apache.kafka:kafka-clients'].each { i ->
       def (g, m) = i.tokenize( ':' )
       dep.exclude group: g  , module: m
     }
   }
}

How do you create nested dict in Python?

This is the nested list from which we will append data to an empty dict:

ls = [['a','a1','a2','a3'],['b','b1','b2','b3'],['c','c1','c2','c3'], 
['d','d1','d2','d3']]

This creates four empty dicts inside data_dict:

data_dict = {f'dict{i}':{} for i in range(4)}
for i in range(4):
    upd_dict = {'val' : ls[i][0], 'val1' : ls[i][1],'val2' : ls[i][2],'val3' : ls[i][3]}

    data_dict[f'dict{i}'].update(upd_dict)

print(data_dict)

The output

{'dict0': {'val': 'a', 'val1': 'a1', 'val2': 'a2', 'val3': 'a3'}, 'dict1': {'val': 'b', 'val1': 'b1', 'val2': 'b2', 'val3': 'b3'},'dict2': {'val': 'c', 'val1': 'c1', 'val2': 'c2', 'val3': 'c3'}, 'dict3': {'val': 'd', 'val1': 'd1', 'val2': 'd2', 'val3': 'd3'}}

Merging two CSV files using Python

When I'm working with csv files, I often use the pandas library. It makes things like this very easy. For example:

import pandas as pd

a = pd.read_csv("filea.csv")
b = pd.read_csv("fileb.csv")
b = b.dropna(axis=1)
merged = a.merge(b, on='title')
merged.to_csv("output.csv", index=False)

Some explanation follows. First, we read in the csv files:

>>> a = pd.read_csv("filea.csv")
>>> b = pd.read_csv("fileb.csv")
>>> a
   title  stage    jan    feb
0   darn  3.001  0.421  0.532
1     ok  2.829  1.036  0.751
2  three  1.115  1.146  2.921
>>> b
   title    mar    apr    may       jun  Unnamed: 5
0   darn  0.631  1.321  0.951    1.7510         NaN
1     ok  1.001  0.247  2.456    0.3216         NaN
2  three  0.285  1.283  0.924  956.0000         NaN

and we see there's an extra column of data (note that the first line of fileb.csv -- title,mar,apr,may,jun, -- has an extra comma at the end). We can get rid of that easily enough:

>>> b = b.dropna(axis=1)
>>> b
   title    mar    apr    may       jun
0   darn  0.631  1.321  0.951    1.7510
1     ok  1.001  0.247  2.456    0.3216
2  three  0.285  1.283  0.924  956.0000

Now we can merge a and b on the title column:

>>> merged = a.merge(b, on='title')
>>> merged
   title  stage    jan    feb    mar    apr    may       jun
0   darn  3.001  0.421  0.532  0.631  1.321  0.951    1.7510
1     ok  2.829  1.036  0.751  1.001  0.247  2.456    0.3216
2  three  1.115  1.146  2.921  0.285  1.283  0.924  956.0000

and finally write this out:

>>> merged.to_csv("output.csv", index=False)

producing:

title,stage,jan,feb,mar,apr,may,jun
darn,3.001,0.421,0.532,0.631,1.321,0.951,1.751
ok,2.829,1.036,0.751,1.001,0.247,2.456,0.3216
three,1.115,1.146,2.921,0.285,1.283,0.924,956.0

How to get the second column from command output?

If you have GNU awk this is the solution you want:

$ awk '{print $1}' FPAT='"[^"]+"' file
"A B"
"C"
"D"

Declare variable MySQL trigger

I agree with neubert about the DECLARE statements; this will fix the syntax error. But I would suggest you avoid opening cursors, as they may be slow.

For your task, use an INSERT ... SELECT statement, which lets you copy data from one table to another with a single query.

INSERT ... SELECT Syntax.

How to add column to numpy array

It can be done like this:

import numpy as np

# create a random matrix:
A = np.random.normal(size=(5,2))

# add a column of zeros to it:
print(np.hstack((A,np.zeros((A.shape[0],1)))))

In general, if A is an m*n matrix and you need to add a column, you have to create an m*1 column of zeros, then use "hstack" to add it to the right of the matrix A.
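
Equivalently, numpy.column_stack appends a 1-D array of zeros as a column without any manual reshaping; a small sketch:

import numpy as np

A = np.random.normal(size=(5, 2))

# column_stack treats the 1-D zeros array as a column and appends it to A
B = np.column_stack((A, np.zeros(A.shape[0])))
print(B.shape)  # (5, 3)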

MySQL create stored procedure syntax with delimiter

Here is my code to create a procedure in MySQL:

DELIMITER $$
CREATE DEFINER=`root`@`localhost` PROCEDURE `procedureName`(IN comId int)
BEGIN
    select * from tableName
        (add joins OR sub query as per your requirement)
        where (where condition here);
END $$
DELIMITER ;

To call this procedure use this query :

call procedureName(); // without parameter
call procedureName(id,pid); // with parameter

Detail:

1) DEFINER: root is the user name; change it as per your MySQL username. localhost is the host; you can change it to the IP address of the server if you are executing this query on a hosting server.

Read here for more detail

Replace new lines with a comma delimiter with Notepad++?

A regex match with \s+ worked for me:

Import CSV file into SQL Server

All of the answers here work great if your data is "clean" (no data constraint violations, etc.) and you have access to putting the file on the server. Some of the answers provided here stop at the first error (PK violation, data-loss error, etc.) and give you one error at a time if using SSMS's built in Import Task. If you want to gather all errors at once (in case you want to tell the person that gave you the .csv file to clean up their data), I recommend the following as an answer. This answer also gives you complete flexibility as you are "writing" the SQL yourself.

Note: I'm going to assume you are running a Windows OS and have access to Excel and SSMS. If not, I'm sure you can tweak this answer to fit your needs.

  1. Using Excel, open your .csv file. In an empty column you will write a formula that will build individual INSERT statements like =CONCATENATE("INSERT INTO dbo.MyTable (FirstName, LastName) VALUES ('", A1, "', '", B1,"')", CHAR(10), "GO") where A1 is a cell that has the first-name data and B1 has the last-name data, for example.

    • CHAR(10) adds a newline character to the final result and GO will allow us to run this INSERT and continue to the next even if there are any errors.
  2. Highlight the cell with your =CONCATENATION() formula

  3. Shift + End to highlight the same column in the rest of your rows

  4. In the ribbon > Home > Editing > Fill > Click Down

    • This applies the formula all the way down the sheet so you don't have to copy-paste, drag, etc. down potentially thousands of rows by hand
  5. Ctrl + C to copy the formulated SQL INSERT statements

  6. Paste into SSMS

  7. You will notice Excel, probably unexpectedly, added double quotes around each of your INSERT and GO commands. This is a "feature" (?) of copying multi-line values out of Excel. You can simply find and replace "INSERT with INSERT, and GO" with GO, to clean that up.

  8. Finally you are ready to run your import process

  9. After the process completes, check the Messages window for any errors. You can select all the content (Ctrl + A) and copy into Excel and use a column filter to remove any successful messages and you are left with any and all the errors.

This process will definitely take longer than other answers here, but if your data is "dirty" and full of SQL violations, you can at least gather all the errors at one time and send them to the person that gave you the data, if that is your scenario.

Import-CSV and Foreach

$IP_Array = (Get-Content test2.csv)[0].split(",")
foreach ( $IP in $IP_Array){
    $IP
}

Get-Content Filename returns an array of strings, one for each line.

On the first string only, I split it on ",", dumping the result into $IP_Array.

$IP_Array = (Get-Content test2.csv)[0].split(",")
foreach ( $IP in $IP_Array){
  if ($IP -eq "2.2.2.2") {
    Write-Host "Found $IP"
  }
}

Splitting on last delimiter in Python string?

I just did this for fun

    >>> s = 'a,b,c,d'
    >>> [item[::-1] for item in s[::-1].split(',', 1)][::-1]
    ['a,b,c', 'd']

Caution: refer to the first comment below for a case where this answer can go wrong.
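
For reference, the standard-library str.rsplit does the same split from the right without the reversing trick:

s = 'a,b,c,d'
print(s.rsplit(',', 1))  # ['a,b,c', 'd']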

Load text file as strings using numpy.loadtxt()

Use genfromtxt instead. It's a much more general method than loadtxt:

import numpy as np
print(np.genfromtxt('col.txt', dtype='str'))

Using the file col.txt:

foo bar
cat dog
man wine

This gives:

[['foo' 'bar']
 ['cat' 'dog']
 ['man' 'wine']]

If you expect that each row has the same number of columns, read the first row and set the attribute filling_values to fix any missing rows.

Displaying better error message than "No JSON object could be decoded"

You could try the rson library found here: http://code.google.com/p/rson/ . It is also up on PyPI: https://pypi.python.org/pypi/rson/0.9 so you can use easy_install or pip to get it.

for the example given by tom:

>>> rson.loads('[1,2,]')
...
rson.base.tokenizer.RSONDecodeError: Unexpected trailing comma: line 1, column 6, text ']'

RSON is designed to be a superset of JSON, so it can parse JSON files. It also has an alternate syntax which is much nicer for humans to look at and edit. I use it quite a bit for input files.

As for the capitalizing of boolean values: it appears that rson reads incorrectly capitalized booleans as strings.

>>> rson.loads('[true,False]')
[True, u'False']

Reading From A Text File - Batch

Your code "for /f "tokens=* delims=" %%x in (a.txt) do echo %%x" will work on most Windows Operating Systems unless you have modified commands.

So you could instead "cd" into the directory to read from before executing the "for /f" command to follow out the string. For instance if the file "a.txt" is located at C:\documents and settings\%USERNAME%\desktop\a.txt then you'd use the following.

cd "C:\documents and settings\%USERNAME%\desktop"
for /f "tokens=* delims=" %%x in (a.txt) do echo %%x
echo.
echo.
echo.
pause >nul
exit

But since this doesn't work on your computer for whatever reason, there is an easier and more efficient way of doing this: using the "type" command.

@echo off
color a
cls
cd "C:\documents and settings\%USERNAME%\desktop"
type a.txt
echo.
echo.
pause >nul
exit

Or, if you'd like the user to select the file to read from within the batch, you could do the following.

@echo off
:A
color a
cls
echo Choose the file that you want to read.
echo.
echo.
tree
echo.
echo.
echo.
set file=
set /p file=File:
cls
echo Reading from %file%
echo.
type %file%
echo.
echo.
echo.
set re=
set /p re=Y/N?:
if %re%==Y goto :A
if %re%==y goto :A
exit

Parse (split) a string in C++ using string delimiter (standard C++)

You can use the following function to split a string:

vector<string> split(const string& str, const string& delim)
{
    vector<string> tokens;
    size_t prev = 0, pos = 0;
    do
    {
        pos = str.find(delim, prev);
        if (pos == string::npos) pos = str.length();
        string token = str.substr(prev, pos-prev);
        if (!token.empty()) tokens.push_back(token);
        prev = pos + delim.length();
    }
    while (pos < str.length() && prev < str.length());
    return tokens;
}

Creating a dictionary from a CSV file

If you are OK with using the numpy package, then you can do something like the following:

import numpy as np

lines = np.genfromtxt("coors.csv", delimiter=",", dtype=None)
my_dict = dict()
for i in range(len(lines)):
   my_dict[lines[i][0]] = lines[i][1]
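
If you'd rather avoid numpy, the same dictionary can be built with just the csv module; a sketch, assuming a two-column file with no header:

import csv

with open("coors.csv", newline="") as f:
    # each row is [key, value]; the comprehension turns the rows into a dict
    my_dict = {row[0]: row[1] for row in csv.reader(f)}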

reading and parsing a TSV file, then manipulating it for saving as CSV (*efficiently*)

You should use the csv module to read the tab-separated value file. Do not read it into memory in one go. Each row you read has all the information you need to write rows to the output CSV file, after all. Keep the output file open throughout.

import csv

with open('sample.txt', newline='') as tsvin, open('new.csv', 'w', newline='') as csvout:
    tsvin = csv.reader(tsvin, delimiter='\t')
    csvout = csv.writer(csvout)

    for row in tsvin:
        count = int(row[4])
        if count > 0:
            csvout.writerows([row[2:4] for _ in range(count)])

or, using the itertools module to do the repeating with itertools.repeat():

from itertools import repeat
import csv

with open('sample.txt', newline='') as tsvin, open('new.csv', 'w', newline='') as csvout:
    tsvin = csv.reader(tsvin, delimiter='\t')
    csvout = csv.writer(csvout)

    for row in tsvin:
        count = int(row[4])
        if count > 0:
            csvout.writerows(repeat(row[2:4], count))

How to extract a string between two delimiters

If there is only 1 occurrence, the answer of ivanovic is the best way I guess. But if there are many occurrences, you should use regexp:

\[(.*?)\] is your pattern, and for each match, group(1) will get you your string.

Pattern p = Pattern.compile("\\[(.*?)\\]");
Matcher m = p.matcher(input);
while(m.find())
{
    m.group(1); //is your string. do what you want
}

Hive load CSV with commas in quoted fields

Keep the delimiter in single quotes and it will work.

ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n';

This will work

plot data from CSV file with matplotlib

I'm guessing

x= data[:,0]
y= data[:,1]
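
For context, a minimal runnable sketch around that guess (the file name and delimiter are assumptions):

import numpy as np
import matplotlib.pyplot as plt

data = np.loadtxt("data.csv", delimiter=",")  # assumed file name and delimiter
x = data[:, 0]
y = data[:, 1]

plt.plot(x, y)
plt.show()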

How to replace space with comma using sed?

Try the following command and it should work out for you.

sed "s/\s/,/g" orignalFive.csv > editedFinal.csv

openpyxl - adjust column width size

I had a problem with merged_cells and auto-size not working correctly. If you have the same problem, you can solve it with the following code:

for col in worksheet.columns:
    max_length = 0
    column = col[0].column # Get the column name
    for cell in col:
        if cell.coordinate in worksheet.merged_cells: # skip merged cells
            continue
        try: # Necessary to avoid error on empty cells
            if len(str(cell.value)) > max_length:
                max_length = len(str(cell.value))
        except:
            pass
    adjusted_width = (max_length + 2) * 1.2
    worksheet.column_dimensions[column].width = adjusted_width

pandas read_csv index_col=None not working with delimiters at the end of each line

Quick Answer

Use index_col=False instead of index_col=None when you have delimiters at the end of each line to turn off index column inference and discard the last column.

More Detail

After looking at the data, there is a comma at the end of each line. And this quote (the documentation has been edited since the time this post was created):

index_col: column number, column name, or list of column numbers/names, to use as the index (row labels) of the resulting DataFrame. By default, it will number the rows without using any column, unless there is one more data column than there are headers, in which case the first column is taken as the index.

from the documentation shows that pandas believes you have n headers and n+1 data columns and is treating the first column as the index.


EDIT 10/20/2014 - More information

I found another valuable entry that is specifically about trailing delimiters and how to simply ignore them:

If a file has one more column of data than the number of column names, the first column will be used as the DataFrame’s row names: ...

Ordinarily, you can achieve this behavior using the index_col option.

There are some exception cases when a file has been prepared with delimiters at the end of each data line, confusing the parser. To explicitly disable the index column inference and discard the last column, pass index_col=False: ...
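
A minimal sketch of the fix (the file name is a placeholder):

import pandas as pd

# index_col=False disables index-column inference and discards the empty
# trailing column created by the delimiter at the end of each line
df = pd.read_csv("data.csv", index_col=False)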

MySQL - Trigger for updating same table after insert

On the last entry, this is another trick:

SELECT AUTO_INCREMENT FROM information_schema.tables WHERE table_schema = ... and table_name = ...

Convert text to columns in Excel using VBA

Try this

Sub Txt2Col()
    Dim rng As Range

    Set rng = [C7]
    Set rng = Range(rng, Cells(Rows.Count, rng.Column).End(xlUp))

    rng.TextToColumns Destination:=rng, DataType:=xlDelimited  ' rest of your settings

Update: button click event to act on another sheet

Private Sub CommandButton1_Click()
    Dim rng As Range
    Dim sh As Worksheet

    Set sh = Worksheets("Sheet2")
    With sh
        Set rng = .[C7]
        Set rng = .Range(rng, .Cells(.Rows.Count, rng.Column).End(xlUp))

        rng.TextToColumns Destination:=rng, DataType:=xlDelimited, _
        TextQualifier:=xlDoubleQuote, _
        ConsecutiveDelimiter:=False, _
        Tab:=False, _
        Semicolon:=False, _
        Comma:=True, _
        Space:=False, _
        Other:=False, _
        FieldInfo:=Array(Array(1, xlGeneralFormat), Array(2, xlGeneralFormat), Array(3, xlGeneralFormat)), _
        TrailingMinusNumbers:=True
    End With
End Sub

Note the leading dots (e.g. .Range); they refer to the With statement object.

How to use numpy.genfromtxt when first column is string and the remaining columns are numbers?

data=np.genfromtxt(csv_file, delimiter=',', dtype='unicode')

It works fine for me.

Using multiple delimiters in awk

Another option is to use -F but pass it a regex, to print the text between the left and/or right parentheses ().

The file content:

528(smbw)
529(smbt)
530(smbn)
10115(smbs)

The command:

awk -F"[()]" '{print $2}' filename

result:

smbw
smbt
smbn
smbs

Using awk to just print the text between []:

Use awk -F'[][]' but awk -F'[[]]' will not work.

http://stanlo45.blogspot.com/2020/06/awk-multiple-field-separators.html

Text File Parsing with Python

I would use a for loop to iterate over the lines in the text file:

for line in my_text:
    outputfile.writelines(data_parser(line, reps))

If you want to read the file line-by-line instead of loading the whole thing at the start of the script you could do something like this:

inputfile = open('test.dat')
outputfile = open('test.csv', 'w')

# sample text string, just for demonstration to let you know how the data looks like
# my_text = '"2012-06-23 03:09:13.23",4323584,-1.911224,-0.4657288,-0.1166382,-0.24823,0.256485,"NAN",-0.3489428,-0.130449,-0.2440527,-0.2942413,0.04944348,0.4337797,-1.105218,-1.201882,-0.5962594,-0.586636'

# dictionary definition 0-, 1- etc. are there to parse the date block delimited with dashes, and make sure the negative numbers are not effected
reps = {'"NAN"':'NAN', '"':'', '0-':'0,','1-':'1,','2-':'2,','3-':'3,','4-':'4,','5-':'5,','6-':'6,','7-':'7,','8-':'8,','9-':'9,', ' ':',', ':':',' }

for i in range(4): inputfile.next() # skip first four lines
for line in inputfile:
    outputfile.writelines(data_parser(line, reps))

inputfile.close()
outputfile.close()

MySQL Trigger: Delete From Table AFTER DELETE

I think there is an error in the trigger code. As you want to delete all rows with the deleted patron's ID, you have to use old.id (otherwise it would delete other IDs).

Try this as the new trigger:

CREATE TRIGGER log_patron_delete AFTER DELETE on patrons
FOR EACH ROW
BEGIN
DELETE FROM patron_info
    WHERE patron_info.pid = old.id;
END

Dont forget the ";" on the delete query. Also if you are entering the TRIGGER code in the console window, make use of the delimiters also.

How to display the first few characters of a string in Python?

If you want the first 2 letters and the last 2 letters of a string, you can use slicing:

name = "India"
name[0:2]   # "In"
name[-2:]   # "ia"

How can I read comma separated values from a text file in Java?

You can also use the java.util.Scanner class.

private static void readFileWithScanner() {
    File file = new File("path/to/your/file/file.txt");

    Scanner scan = null;

    try {
        scan = new Scanner(file);

        while (scan.hasNextLine()) {
            String line = scan.nextLine();
            String[] lineArray = line.split(",");
            // do something with lineArray, such as instantiate an object
        }
    } catch (FileNotFoundException e) {
        e.printStackTrace();
    } finally {
        scan.close();
    }
}

MySQL Stored procedure variables from SELECT statements

You simply need to enclose your SELECT statements in parentheses to indicate that they are subqueries:

SET cityLat = (SELECT cities.lat FROM cities WHERE cities.id = cityID);

Alternatively, you can use MySQL's SELECT ... INTO syntax. One advantage of this approach is that both cityLat and cityLng can be assigned from a single table-access:

SELECT lat, lng INTO cityLat, cityLng FROM cities WHERE id = cityID;

However, the entire procedure can be replaced with a single self-joined SELECT statement:

SELECT   b.*, HAVERSINE(a.lat, a.lng, b.lat, b.lng) AS dist
FROM     cities AS a, cities AS b
WHERE    a.id = cityID
ORDER BY dist
LIMIT    10;

Troubleshooting "Warning: session_start(): Cannot send session cache limiter - headers already sent"

This should solve your problem. session_start() should be called before any character is sent back to the browser. In your case, HTML and blank lines were sent before you called session_start(). Documentation here.

To further explain why it works when you submit to a different page: that page either does not use session_start(), or calls session_start() before sending any character back to the client. This page, on the other hand, was calling session_start() much later, after a lot of HTML had already been sent back to the client (browser).

The better way to code is to have a common header file that connects to the MySQL database, calls session_start() and does other common things for all pages, and to include that file at the top of each page like below:

include "header.php";

This will stop issues like the one you are having, and also give you a common set of code to manage across a project. Something definitely worth thinking about, I would suggest, after looking at your code.

<?php
session_start();

if (isset($_SESSION['error']))
{
    echo "<span id=\"error\"><p>" . $_SESSION['error'] . "</p></span>";
    unset($_SESSION['error']);
}
?>

<form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post" enctype="multipart/form-data">
<p>
    <label class="style4">Category Name</label>
    &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <input type="text" name="categoryname" /><br /><br />
    <label class="style4">Category Image</label>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
    <input type="file" name="image" /><br />
    <input type="hidden" name="MAX_FILE_SIZE" value="100000" />
    <br />
    <br />
    <input type="submit" id="submit" value="UPLOAD" />
</p>
</form>

<?php

require("includes/conn.php");

function is_valid_type($file)
{
    $valid_types = array("image/jpg", "image/jpeg", "image/bmp", "image/gif", "image/png");

    if (in_array($file['type'], $valid_types))
        return 1;
    return 0;
}

function showContents($array)
{
    echo "<pre>";
    print_r($array);
    echo "</pre>";
}

$TARGET_PATH = "images/category";

$cname = $_POST['categoryname'];
$image = $_FILES['image'];

$cname = mysql_real_escape_string($cname);
$image['name'] = mysql_real_escape_string($image['name']);

$TARGET_PATH .= $image['name'];

if ( $cname == "" || $image['name'] == "" )
{
    $_SESSION['error'] = "All fields are required";
    header("Location: managecategories.php");
    exit;
}

if (!is_valid_type($image))
{
    $_SESSION['error'] = "You must upload a jpeg, gif, or bmp";
    header("Location: managecategories.php");
    exit;
}

if (file_exists($TARGET_PATH))
{
    $_SESSION['error'] = "A file with that name already exists";
    header("Location: managecategories.php");
    exit;
}

if (move_uploaded_file($image['tmp_name'], $TARGET_PATH))
{
    $sql = "insert into Categories (CategoryName, FileName) values ('$cname', '" . $image['name'] . "')";
    $result = mysql_query($sql) or die ("Could not insert data into DB: " . mysql_error());
    header("Location: mangaecategories.php");
    exit;
}
else
{
    $_SESSION['error'] = "Could not upload file.  Check read/write persmissions on the directory";
    header("Location: mangagecategories.php");
    exit;
}

?>

How to split one string into multiple variables in bash shell?

If you know it's going to be just two fields, you can skip the extra subprocesses like this:

var1=${STR%-*}
var2=${STR#*-}

What does this do? ${STR%-*} deletes the shortest substring of $STR that matches the pattern -* starting from the end of the string. ${STR#*-} does the same, but with the *- pattern and starting from the beginning of the string. They each have counterparts %% and ## which find the longest anchored pattern match. If anyone has a helpful mnemonic to remember which does which, let me know! I always have to try both to remember.

How to read numbers separated by space using scanf

It should be as simple as using a list of receiving variables:

scanf("%i %i %i", &var1, &var2, &var3);

UnicodeDecodeError: 'ascii' codec can't decode byte 0xd1 in position 2: ordinal not in range(128)

For Python 3 users, you can do:

with open(csv_name_here, 'r', encoding="utf-8") as f:
    #some codes

It works with Flask too :)

C split a char array into different variables

I came up with this, and it seems to work best for me. It converts a string of digits into an array of integers:

void splitInput(int arr[], int sizeArr, char num[])
{
    for(int i = 0; i < sizeArr; i++)
        // We are subtracting 48 because the numbers in ASCII starts at 48.
        arr[i] = (int)num[i] - 48;
}

PHP convert date format dd/mm/yyyy => yyyy-mm-dd

Here's another solution that doesn't use date(). Not so smart, but it works :)

$var = '20/04/2012';
echo implode("-", array_reverse(explode("/", $var)));

Hyphen, underscore, or camelCase as word delimiter in URIs?

Here's the best of both worlds.

I also "like" underscores; besides all your positive points about them, there is also a certain old-school style to them.

So what I do is use underscores, and simply add a small rewrite rule to Apache's .htaccess file to rewrite all underscores to hyphens.

https://yoast.com/apache-rewrite-dash-underscore/

Delimiters in MySQL

The DELIMITER statement changes the standard delimiter, which is the semicolon (;), to another one. Here the delimiter is changed from the semicolon (;) to double slashes (//).

Why do we have to change the delimiter?

Because we want to pass the stored procedure, custom functions etc. to the server as a whole rather than letting the mysql tool interpret each statement one at a time.

How can I generate a list or array of sequential integers in Java?

With Java 8 it is so simple that it doesn't even need a separate method anymore:

List<Integer> range = IntStream.rangeClosed(start, end)
    .boxed().collect(Collectors.toList());

ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

The reason for the exception is that and implicitly calls bool. First on the left operand and (if the left operand is True) then on the right operand. So x and y is equivalent to bool(x) and bool(y).

However the bool on a numpy.ndarray (if it contains more than one element) will throw the exception you have seen:

>>> import numpy as np
>>> arr = np.array([1, 2, 3])
>>> bool(arr)
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

The bool() call is implicit in and, but also in if, while, or, so any of the following examples will also fail:

>>> arr and arr
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

>>> if arr: pass
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

>>> while arr: pass
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

>>> arr or arr
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()

There are more functions and statements in Python that hide bool calls, for example 2 < x < 10 is just another way of writing 2 < x and x < 10. And the and will call bool: bool(2 < x) and bool(x < 10).

The element-wise equivalent for and would be the np.logical_and function, similarly you could use np.logical_or as equivalent for or.

For boolean arrays - and comparisons like <, <=, ==, !=, >= and > on NumPy arrays return boolean NumPy arrays - you can also use the element-wise bitwise functions (and operators): np.bitwise_and (& operator)

>>> np.logical_and(arr > 1, arr < 3)
array([False,  True, False], dtype=bool)

>>> np.bitwise_and(arr > 1, arr < 3)
array([False,  True, False], dtype=bool)

>>> (arr > 1) & (arr < 3)
array([False,  True, False], dtype=bool)

and bitwise_or (| operator):

>>> np.logical_or(arr <= 1, arr >= 3)
array([ True, False,  True], dtype=bool)

>>> np.bitwise_or(arr <= 1, arr >= 3)
array([ True, False,  True], dtype=bool)

>>> (arr <= 1) | (arr >= 3)
array([ True, False,  True], dtype=bool)

A complete list of logical and binary functions can be found in the NumPy documentation.

Converting ArrayList to Array in java

Here is the solution for your given scenario:

List<String> ls = new ArrayList<String>();
ls.add("dfsa#FSDfsd");
ls.add("dfsdaor#ooiui");

String[] firstArray = new String[ls.size()];
firstArray = ls.toArray(firstArray);

String[] secondArray = new String[ls.size()];
for (int i = 0; i < ls.size(); i++) {
    secondArray[i] = firstArray[i].split("#")[0];
    firstArray[i] = firstArray[i].split("#")[1];
}

How to load a tsv file into a Pandas DataFrame?

df = pd.read_csv('filename.csv', sep='\t', header=0)

You can load the tsv file directly into a pandas DataFrame by specifying the delimiter and header.

How to merge every two lines into one from the command line?

A couple of other solutions using vim (just for reference).

Solution 1:

Open file in vim vim filename, then execute command :% normal Jj

This command is very easy to understand:

  • % : for all the lines,
  • normal : execute normal command
  • Jj : execute Join command, then jump to below line

After that, save the file and exit with :wq

Solution 2:

Execute the command in shell, vim -c ":% normal Jj" filename, then save the file and exit with :wq.

Alternate output format for psql

Also be sure to check out \H, which toggles HTML output on/off. Not necessarily easy to read at the console, but interesting for dumping into a file (see \o) or pasting into an editor/browser window for viewing, especially with multiple rows of relatively complex data.

Simple Digit Recognition OCR in OpenCV-Python

OCR, which stands for Optical Character Recognition, is a computer vision technique used to identify handwritten characters such as digits. To perform OCR in OpenCV we will use the KNN algorithm, which finds the k nearest neighbors of a particular data point and then classifies that data point according to the majority class among those neighbors.

Data Used


This data contains 5000 handwritten digits where there are 500 digits for every type of digit. Each digit is of 20×20 pixel dimensions. We will split the data such that 250 digits are for training and 250 digits are for testing for every class.

Below is the implementation.




import numpy as np
import cv2

# Read the image
image = cv2.imread('digits.png')

# gray scale conversion
gray_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# We will divide the image
# into 5000 small cells
# of size 20x20
divisions = list(np.hsplit(i, 100) for i in np.vsplit(gray_img, 50))

# Convert into Numpy array
# of size (50,100,20,20)
NP_array = np.array(divisions)

# Preparing train_data
# and test_data.
# Size will be (2500,400)
train_data = NP_array[:, :50].reshape(-1, 400).astype(np.float32)

# Size will be (2500,400)
test_data = NP_array[:, 50:100].reshape(-1, 400).astype(np.float32)

# Create 10 different labels
# for each type of digit
k = np.arange(10)
train_labels = np.repeat(k, 250)[:, np.newaxis]
test_labels = np.repeat(k, 250)[:, np.newaxis]

# Initiate kNN classifier
knn = cv2.ml.KNearest_create()

# perform training of data
knn.train(train_data, cv2.ml.ROW_SAMPLE, train_labels)

# obtain the output from the
# classifier by specifying the
# number of neighbors.
ret, output, neighbours, distance = knn.findNearest(test_data, k=3)

# Check the performance and
# accuracy of the classifier.
# Compare the output with test_labels
# to find out how many are wrong.
matched = output == test_labels
correct_OP = np.count_nonzero(matched)

# Calculate the accuracy.
accuracy = (correct_OP * 100.0) / output.size

# Display accuracy.
print(accuracy)


Output

91.64


Well, I decided to workout myself on my question to solve the above problem. What I wanted is to implement a simple OCR using KNearest or SVM features in OpenCV. And below is what I did and how. (it is just for learning how to use KNearest for simple OCR purposes).

1) My first question was about letter_recognition.data file that comes with OpenCV samples. I wanted to know what is inside that file.

It contains a letter, along with 16 features of that letter.

And this SOF helped me to find it. These 16 features are explained in the paper Letter Recognition Using Holland-Style Adaptive Classifiers. (Although I didn't understand some of the features at the end)

2) I knew that, without understanding all those features, it would be difficult to use that method. So I tried some other papers, but they were all a little difficult for a beginner.

So I just decided to take all the pixel values as my features. (I was not worried about accuracy or performance, I just wanted it to work, at least with the least accuracy)

I took the below image for my training data:

[image: training data (pitrain.png)]

(I know the amount of training data is less. But, since all letters are of the same font and size, I decided to try on this).

To prepare the data for training, I made a small code in OpenCV. It does the following things:

  1. It loads the image.
  2. Selects the digits (obviously by contour finding and applying constraints on area and height of letters to avoid false detections).
  3. Draws the bounding rectangle around one letter and waits for a key press. This time we press the digit key ourselves, corresponding to the letter in the box.
  4. Once the corresponding digit key is pressed, it resizes this box to 10x10 and saves all 100 pixel values in an array (here, samples) and corresponding manually entered digit in another array(here, responses).
  5. Then save both the arrays in separate .txt files.

At the end of the manual classification, all the digits in the training data (train.png) have been labeled by ourselves, and the image will look like below:

[image: training data with manually assigned labels]

Below is the code I used for the above purpose (of course, not so clean):

import sys

import numpy as np
import cv2

im = cv2.imread('pitrain.png')
im3 = im.copy()

gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray,(5,5),0)
thresh = cv2.adaptiveThreshold(blur,255,1,1,11,2)

#################      Now finding Contours         ###################

contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

samples =  np.empty((0,100))
responses = []
keys = [i for i in range(48,58)]

for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)
        
        if  h>28:
            cv2.rectangle(im,(x,y),(x+w,y+h),(0,0,255),2)
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            cv2.imshow('norm',im)
            key = cv2.waitKey(0)

            if key == 27:  # (escape to quit)
                sys.exit()
            elif key in keys:
                responses.append(int(chr(key)))
                sample = roismall.reshape((1,100))
                samples = np.append(samples,sample,0)

responses = np.array(responses,np.float32)
responses = responses.reshape((responses.size,1))
print "training complete"

np.savetxt('generalsamples.data',samples)
np.savetxt('generalresponses.data',responses)

Now we enter in to training and testing part.

For the testing part, I used the below image, which has the same type of letters I used for the training phase.

[image: test data (pi.png)]

For training we do as follows:

  1. Load the .txt files we already saved earlier
  2. create an instance of the classifier we are using (it is KNearest in this case)
  3. Then we use KNearest.train function to train the data

For testing purposes, we do as follows:

  1. We load the image used for testing
  2. process the image as earlier and extract each digit using contour methods
  3. Draw a bounding box for it, then resize it to 10x10, and store its pixel values in an array as done earlier.
  4. Then we use KNearest.find_nearest() function to find the nearest item to the one we gave. ( If lucky, it recognizes the correct digit.)

I included last two steps (training and testing) in single code below:

import cv2
import numpy as np

#######   training part    ############### 
samples = np.loadtxt('generalsamples.data',np.float32)
responses = np.loadtxt('generalresponses.data',np.float32)
responses = responses.reshape((responses.size,1))

model = cv2.KNearest()
model.train(samples,responses)

############################# testing part  #########################

im = cv2.imread('pi.png')
out = np.zeros(im.shape,np.uint8)
gray = cv2.cvtColor(im,cv2.COLOR_BGR2GRAY)
thresh = cv2.adaptiveThreshold(gray,255,1,1,11,2)

contours,hierarchy = cv2.findContours(thresh,cv2.RETR_LIST,cv2.CHAIN_APPROX_SIMPLE)

for cnt in contours:
    if cv2.contourArea(cnt)>50:
        [x,y,w,h] = cv2.boundingRect(cnt)
        if  h>28:
            cv2.rectangle(im,(x,y),(x+w,y+h),(0,255,0),2)
            roi = thresh[y:y+h,x:x+w]
            roismall = cv2.resize(roi,(10,10))
            roismall = roismall.reshape((1,100))
            roismall = np.float32(roismall)
            retval, results, neigh_resp, dists = model.find_nearest(roismall, k = 1)
            string = str(int((results[0][0])))
            cv2.putText(out,string,(x,y+h),0,1,(0,255,0))

cv2.imshow('im',im)
cv2.imshow('out',out)
cv2.waitKey(0)

And it worked, below is the result I got:

[image: recognition result]


Here it worked with 100% accuracy. I assume this is because all the digits are of the same kind and the same size.

But anyway, this is a good start to go for beginners (I hope so).

Split string with delimiters in C

Here is a way to return an array of null-terminated strings (like you want) in place in the string. Note that you cannot pass a string literal, as the string has to be modified by the function:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>

char** str_split( char* str, char delim, int* numSplits )
{
    char** ret;
    int retLen;
    char* c;

    if ( ( str == NULL ) ||
        ( delim == '\0' ) )
    {
        /* Either of those will cause problems */
        ret = NULL;
        retLen = -1;
    }
    else
    {
        retLen = 0;
        c = str;

        /* Pre-calculate number of elements */
        do
        {
            if ( *c == delim )
            {
                retLen++;
            }

            c++;
        } while ( *c != '\0' );

        ret = malloc( ( retLen + 1 ) * sizeof( *ret ) );
        ret[retLen] = NULL;

        c = str;
        retLen = 1;
        ret[0] = str;

        do
        {
            if ( *c == delim )
            {
                ret[retLen++] = &c[1];
                *c = '\0';
            }

            c++;
        } while ( *c != '\0' );
    }

    if ( numSplits != NULL )
    {
        *numSplits = retLen;
    }

    return ret;
}

int main( int argc, char* argv[] )
{
    const char* str = "JAN,FEB,MAR,APR,MAY,JUN,JUL,AUG,SEP,OCT,NOV,DEC";

    char* strCpy;
    char** split;
    int num;
    int i;

    strCpy = malloc( ( strlen( str ) + 1 ) * sizeof( *strCpy ) ); /* +1 for the terminating '\0' */
    strcpy( strCpy, str );

    split = str_split( strCpy, ',', &num );

    if ( split == NULL )
    {
        puts( "str_split returned NULL" );
    }
    else
    {
        printf( "%i Results: \n", num );

        for ( i = 0; i < num; i++ )
        {
            puts( split[i] );
        }
    }

    free( split );
    free( strCpy );

    return 0;
}

There is probably a neater way to do it, but you get the idea.

Bash array with spaces in elements

I reset the IFS value and roll it back when done.

# backup IFS value
O_IFS=$IFS

# reset IFS value
IFS=""

FILES=(
"2011-09-04 21.43.02.jpg"
"2011-09-05 10.23.14.jpg"
"2011-09-09 12.31.16.jpg"
"2011-09-11 08.43.12.jpg"
)

for file in ${FILES[@]}; do
    echo ${file}
done

# rollback IFS value
IFS=${O_IFS}

Possible output from the loop:

2011-09-04 21.43.02.jpg

2011-09-05 10.23.14.jpg

2011-09-09 12.31.16.jpg

2011-09-11 08.43.12.jpg

TypeError: unhashable type: 'numpy.ndarray'

Your variable energies probably has the wrong shape:

>>> from numpy import array
>>> set([1,2,3]) & set(range(2, 10))
set([2, 3])
>>> set(array([1,2,3])) & set(range(2,10))
set([2, 3])
>>> set(array([[1,2,3],])) & set(range(2,10))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'numpy.ndarray'

And that's what happens if you read columnar data using your approach:

>>> data
array([[  1.,   2.,   3.],
       [  3.,   4.,   5.],
       [  5.,   6.,   7.],
       [  8.,   9.,  10.]])
>>> hsplit(data,3)[0]
array([[ 1.],
       [ 3.],
       [ 5.],
       [ 8.]])

Probably you can simply use

>>> data[:,0]
array([ 1.,  3.,  5.,  8.])

instead.

(P.S. Your code looks like it's undecided about whether it's data or elementdata. I've assumed it's simply a typo.)

How can I split a string with a string delimiter?

.Split(new string[] { "is Marco and" }, StringSplitOptions.None)

Consider the spaces surrounding "is Marco and". Do you want to include the spaces in your result, or do you want them removed? It's quite possible that you want to use " is Marco and " as separator...

"Cannot send session cache limiter - headers already sent"

"Headers already sent" means that your PHP script already sent the HTTP headers, and as such it can't make modifications to them now.

Check that you don't send ANY content before calling session_start. Better yet, just make session_start the first thing you do in your PHP file (so put it at the absolute beginning, before all HTML etc).

Why does CSV file contain a blank line in between each data line when outputting with Dictwriter in Python

By default, the classes in the csv module use Windows-style line terminators (\r\n) rather than Unix-style (\n). Could this be what’s causing the apparent double line breaks?

If so, you can override it in the DictWriter constructor:

output = csv.DictWriter(open('file3.csv','w'), delimiter=',', lineterminator='\n', fieldnames=headers)

Printing string variable in Java

You're getting the toString() value returned by the Scanner object itself which is not what you want and not how you use a Scanner object. What you want instead is the data obtained by the Scanner object. For example,

Scanner input = new Scanner(System.in);
String data = input.nextLine();
System.out.println(data);

Please read the tutorial on how to use it as it will explain all.

Edit
Please look here: Scanner tutorial

Also have a look at the Scanner API which will explain some of the finer points of Scanner's methods and properties.

Delimiter must not be alphanumeric or backslash and preg_match

You must specify a delimiter for your expression. A delimiter is a special character used at the start and end of your expression to denote which part is the expression. This allows you to use modifiers and the interpreter to know which is an expression and which are modifiers. As the error message states, the delimiter cannot be a backslash because the backslash is the escape character.

$pattern = "/My name is '(.*)' and im fine/";

and below the same example but with the i modifier to match without being case sensitive.

$pattern = "/My name is '(.*)' and im fine/i";

As you can see, the i is outside of the slashes and therefore is interpreted as a modifier.

Also bear in mind that if you use a forward slash character (/) as a delimiter you must then escape further uses of / in the regular expression, if present.

How do I write data to csv file in columns and rows from a list in python?

Have a go with this code:

>>> import pyexcel as pe
>>> data = [[1, 2], [2, 3], [4, 5]]
>>> sheet = pe.Sheet(data)
>>> sheet
Sheet Name: pyexcel
+---+---+
| 1 | 2 |
+---+---+
| 2 | 3 |
+---+---+
| 4 | 5 |
+---+---+
>>> sheet.save_as("one.csv")
>>> b = [[126, 125, 123, 122, 123, 125, 128, 127, 128, 129, 130, 130, 128, 126, 124, 126, 126, 128, 129, 130, 130, 130, 130, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 132, 134, 134, 134, 134, 134, 134, 134, 134, 133, 134, 135, 134, 133, 133, 134, 135, 136], [135, 135, 136, 137, 137, 136, 134, 135, 135, 135, 134, 134, 133, 133, 133, 134, 134, 134, 133, 133, 132, 132, 132, 135, 135, 133, 133, 133, 133, 135, 135, 131, 135, 136, 134, 133, 136, 137, 136, 133, 134, 135, 136, 136, 135, 134, 133, 133, 134, 135, 136, 136, 136, 135, 134, 135, 138, 138, 135, 135, 138, 138, 135, 139], [137, 135, 136, 138, 139, 137, 135, 142, 139, 137, 139, 138, 136, 137, 141, 138, 138, 139, 139, 139, 139, 138, 138, 138, 138, 137, 137, 137, 137, 138, 138, 136, 137, 137, 137, 137, 137, 137, 138, 148, 144, 140, 138, 137, 138, 138, 138, 137, 137, 137, 137, 137, 138, 139, 140, 141, 141, 141, 141, 141, 141, 141, 141, 141], [141, 141, 141, 141, 141, 141, 141, 139, 139, 139, 140, 140, 141, 141, 141, 140, 140, 140, 140, 140, 141, 142, 143, 138, 138, 138, 139, 139, 140, 140, 140, 141, 140, 139, 139, 141, 141, 140, 139, 145, 137, 137, 145, 145, 137, 137, 144, 141, 139, 146, 134, 145, 140, 149, 144, 145, 142, 140, 141, 144, 145, 142, 139, 140]]
>>> s2 = pe.Sheet(b)
>>> s2
Sheet Name: pyexcel
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| 126 | 125 | 123 | 122 | 123 | 125 | 128 | 127 | 128 | 129 | 130 | 130 | 128 | 126 | 124 | 126 | 126 | 128 | 129 | 130 | 130 | 130 | 130 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 132 | 134 | 134 | 134 | 134 | 134 | 134 | 134 | 134 | 133 | 134 | 135 | 134 | 133 | 133 | 134 | 135 | 136 |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| 135 | 135 | 136 | 137 | 137 | 136 | 134 | 135 | 135 | 135 | 134 | 134 | 133 | 133 | 133 | 134 | 134 | 134 | 133 | 133 | 132 | 132 | 132 | 135 | 135 | 133 | 133 | 133 | 133 | 135 | 135 | 131 | 135 | 136 | 134 | 133 | 136 | 137 | 136 | 133 | 134 | 135 | 136 | 136 | 135 | 134 | 133 | 133 | 134 | 135 | 136 | 136 | 136 | 135 | 134 | 135 | 138 | 138 | 135 | 135 | 138 | 138 | 135 | 139 |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| 137 | 135 | 136 | 138 | 139 | 137 | 135 | 142 | 139 | 137 | 139 | 138 | 136 | 137 | 141 | 138 | 138 | 139 | 139 | 139 | 139 | 138 | 138 | 138 | 138 | 137 | 137 | 137 | 137 | 138 | 138 | 136 | 137 | 137 | 137 | 137 | 137 | 137 | 138 | 148 | 144 | 140 | 138 | 137 | 138 | 138 | 138 | 137 | 137 | 137 | 137 | 137 | 138 | 139 | 140 | 141 | 141 | 141 | 141 | 141 | 141 | 141 | 141 | 141 |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| 141 | 141 | 141 | 141 | 141 | 141 | 141 | 139 | 139 | 139 | 140 | 140 | 141 | 141 | 141 | 140 | 140 | 140 | 140 | 140 | 141 | 142 | 143 | 138 | 138 | 138 | 139 | 139 | 140 | 140 | 140 | 141 | 140 | 139 | 139 | 141 | 141 | 140 | 139 | 145 | 137 | 137 | 145 | 145 | 137 | 137 | 144 | 141 | 139 | 146 | 134 | 145 | 140 | 149 | 144 | 145 | 142 | 140 | 141 | 144 | 145 | 142 | 139 | 140 |
+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
>>> s2[0,0]
126
>>> s2.save_as("two.csv")

Ruby: Easiest Way to Filter Hash Keys?

This is a one line to solve the complete original question:

params.select { |k,_| k[/choice/]}.values.join('\t')

But most of the solutions above solve a case where you need to know the keys ahead of time, using slice or a simple regexp.

Here is another approach that works for simple and more complex use cases, and that is swappable at runtime:

data = {}
matcher = ->(key,value) { COMPLEX LOGIC HERE }
data.select(&matcher)

Now not only this allows for more complex logic on matching the keys or the values, but it is also easier to test, and you can swap the matching logic at runtime.

For example, to solve the original issue:

def some_method(hash, matcher) 
  hash.select(&matcher).values.join('\t')
end

params = { :irrelevant => "A String",
           :choice1 => "Oh look, another one",
           :choice2 => "Even more strings",
           :choice3 => "But wait",
           :irrelevant2 => "The last string" }

some_method(params, ->(k,_) { k[/choice/]}) # => "Oh look, another one\\tEven more strings\\tBut wait"
some_method(params, ->(_,v) { v[/string/]}) # => "Even more strings\\tThe last string"

How do I fix 'Invalid character value for cast specification' on a date column in flat file?

I was ultimately able to resolve the solution by setting the column type in the flat file connection to be of type "database date [DT_DBDATE]"

Apparently the differences between these date formats are as follow:

DT_DATE A date structure that consists of year, month, day, and hour.

DT_DBDATE A date structure that consists of year, month, and day.

DT_DBTIMESTAMP A timestamp structure that consists of year, month, hour, minute, second, and fraction

By changing the column type to DT_DBDATE the issue was resolved - I attached a Data Viewer and the CYCLE_DATE value was now simply "12/20/2010" without a time component, which apparently resolved the issue.

How to specify more spaces for the delimiter using cut?

I like to use the tr -s command for this

 ps aux | tr -s '[:blank:]' | cut -d' ' -f3

This squeezes all white spaces down to 1 space. This way telling cut to use a space as a delimiter is honored as expected.

Split column at delimiter in data frame

@Taesung Shin is right, but then just some more magic to make it into a data.frame. I added a "x|y" line to avoid ambiguities:

df <- data.frame(ID=11:13, FOO=c('a|b','b|c','x|y'))
foo <- data.frame(do.call('rbind', strsplit(as.character(df$FOO),'|',fixed=TRUE)))

Or, if you want to replace the columns in the existing data.frame:

within(df, FOO<-data.frame(do.call('rbind', strsplit(as.character(FOO), '|', fixed=TRUE))))

Which produces:

  ID FOO.X1 FOO.X2
1 11      a      b
2 12      b      c
3 13      x      y

String delimiter in string.split method

String[] splitArray = subjectString.split("\\|\\|");

Or wrap it in a function:

public String[] stringSplit(String string){

    String[] splitArray = string.split("\\|\\|");
    return splitArray;
}

Splitting on first occurrence

From the docs:

str.split([sep[, maxsplit]])

Return a list of the words in the string, using sep as the delimiter string. If maxsplit is given, at most maxsplit splits are done (thus, the list will have at most maxsplit+1 elements).

s.split('mango', 1)[1]

Use String.split() with multiple delimiters

pdfName.split("[.-]+");

  • [.-] -> any one of the . or - can be used as delimiter

  • + sign signifies that if the aforementioned delimiters occur consecutively we should treat it as one.

Right way to split an std::string into a vector<string>

vector<string> split(string str, string token){
    vector<string>result;
    while(str.size()){
        size_t index = str.find(token);
        if(index != string::npos){
            result.push_back(str.substr(0,index));
            str = str.substr(index+token.size());
            if(str.size()==0)result.push_back(str);
        }else{
            result.push_back(str);
            str = "";
        }
    }
    return result;
}

split("1,2,3",",") ==> ["1","2","3"]

split("1,2,",",") ==> ["1","2",""]

split("1token2token3","token") ==> ["1","2","3"]

Spaces in URLs?

The information there is, I think, partially correct. It claims:

"That's not true. An URL can use spaces. Nothing defines that a space is replaced with a + sign."

As you noted, a URL can NOT use spaces (the HTTP request would get screwed up). I'm not sure where the + is defined, though %20 is standard.

PHP: Split string

explode does the job:

$parts = explode('.', $string);

You can also directly fetch parts of the result into variables:

list($part1, $part2) = explode('.', $string);

For loop example in MySQL

Assume you have a table named 'table1'. It contains one column 'col1' of varchar type. The query to create the table is given below:

CREATE TABLE `table1` (
    `col1` VARCHAR(50) NULL DEFAULT NULL
)

Now if you want to insert the numbers from 1 to 50 into that table, use the following stored procedure:

DELIMITER $$  
CREATE PROCEDURE ABC()

   BEGIN
      DECLARE a INT Default 1 ;
      simple_loop: LOOP         
         insert into table1 values(a);
         SET a=a+1;
         IF a=51 THEN
            LEAVE simple_loop;
         END IF;
   END LOOP simple_loop;
END $$

To call that stored procedure use

CALL `ABC`()

How to split string using delimiter char using T-SQL?

For your specific data, you can use

Select col1, col2, LTRIM(RTRIM(SUBSTRING(
    STUFF(col3, CHARINDEX('|', col3,
    PATINDEX('%|Client Name =%', col3) + 14), 1000, ''),
    PATINDEX('%|Client Name =%', col3) + 14, 1000))) col3
from Table01

EDIT - charindex vs patindex

Test

select col3='Clent ID = 4356hy|Client Name = B B BOB|Client Phone = 667-444-2626|Client Fax = 666-666-0151|Info = INF8888877 -MAC333330554/444400800'
into t1m
from master..spt_values a
cross join master..spt_values b
where a.number < 100
-- (711704 row(s) affected)

set statistics time on

dbcc dropcleanbuffers
dbcc freeproccache
select a=CHARINDEX('|Client Name =', col3) into #tmp1 from t1m
drop table #tmp1

dbcc dropcleanbuffers
dbcc freeproccache
select a=PATINDEX('%|Client Name =%', col3) into #tmp2 from t1m
drop table #tmp2

set statistics time off

Timings

CHARINDEX:

 SQL Server Execution Times (1):
   CPU time = 5656 ms,  elapsed time = 6418 ms.
 SQL Server Execution Times (2):
   CPU time = 5813 ms,  elapsed time = 6114 ms.
 SQL Server Execution Times (3):
   CPU time = 5672 ms,  elapsed time = 6108 ms.

PATINDEX:

 SQL Server Execution Times (1):
   CPU time = 5906 ms,  elapsed time = 6296 ms.
 SQL Server Execution Times (2):
   CPU time = 5860 ms,  elapsed time = 6404 ms.
 SQL Server Execution Times (3):
   CPU time = 6109 ms,  elapsed time = 6301 ms.

Conclusion

The timings for CharIndex and PatIndex for 700k calls are within 3.5% of each other, so I don't think it would matter whichever is used. I use them interchangeably when both can work.

Split string with multiple delimiters in Python

This is what the regex looks like:

import re
# "semicolon or (a comma followed by a space)"
pattern = re.compile(r";|, ")

# "(semicolon or a comma) followed by a space"
pattern = re.compile(r"[;,] ")

print pattern.split(text)

invalid byte sequence for encoding "UTF8"

You can try specifying the file's actual encoding so it is converted correctly on import:

copy tablename from 'filepath\filename' DELIMITERS '=' ENCODING 'WIN1252';

Splitting strings using a delimiter in python

So, your input is 'dan|warrior|54' and you want "warrior". You do this like so:

>>> dan = 'dan|warrior|54'
>>> dan.split('|')[1]
"warrior"

PHP regular expressions: No ending delimiter '^' found in

You can use T-Regx library, that doesn't need delimiters

pattern('^([0-9]+)$')->match($input);

Is there a way to include commas in CSV columns without breaking the formatting?

Enclose the field in quotes, e.g.

field1_value,field2_value,"field 3,value",field4, etc...

See wikipedia.

Updated:

To encode a double quote inside a field, double it (" becomes "") and wrap the whole field in quotes, so a field containing just one " character becomes """". So if you see the following in e.g. Excel:

---------------------------------------
| regular_value |,,,"|  ,"", |"""   |"|
---------------------------------------

the CSV file will contain:

regular_value,",,,""",","""",","""""""",""""

A comma is simply encapsulated using quotes, so , becomes ",".

A comma and quote needs to be encapsulated and quoted, so "," becomes """,""".

load csv into 2D matrix with numpy for plotting

Pure numpy

numpy.loadtxt(open("test.csv", "rb"), delimiter=",", skiprows=1)

Check out the loadtxt documentation.

You can also use python's csv module:

import csv
import numpy
reader = csv.reader(open("test.csv", "rb"), delimiter=",")
x = list(reader)
result = numpy.array(x).astype("float")

You will have to convert it to your favorite numeric type. I guess you can write the whole thing in one line:

result = numpy.array(list(csv.reader(open("test.csv", "rb"), delimiter=","))).astype("float")

Added Hint:

You could also use pandas.io.parsers.read_csv and get the associated numpy array which can be faster.

How to make the 'cut' command treat same sequental delimiters as one?

As you comment in your question, awk is really the way to go. Using cut is possible together with tr -s to squeeze spaces, as kev's answer shows.

Let me however go through all the possible combinations for future readers. Explanations are at the Test section.

tr | cut

tr -s ' ' < file | cut -d' ' -f4

awk

awk '{print $4}' file

bash

while read -r _ _ _ myfield _
do
   echo "forth field: $myfield"
done < file

sed

sed -r 's/^([^ ]*[ ]*){3}([^ ]*).*/\2/' file

Tests

Given this file, let's test the commands:

$ cat a
this   is    line     1 more text
this      is line    2     more text
this    is line 3     more text
this is   line 4            more    text

tr | cut

$ cut -d' ' -f4 a
is
                        # it does not show what we want!


$ tr -s ' ' < a | cut -d' ' -f4
1
2                       # this makes it!
3
4
$

awk

$ awk '{print $4}' a
1
2
3
4

bash

This reads the fields sequentially. By using _ we indicate a throwaway ("junk") variable used to ignore those fields. This way, we store the 4th field of the file in $myfield, no matter the spaces in between.

$ while read -r _ _ _ a _; do echo "4th field: $a"; done < a
4th field: 1
4th field: 2
4th field: 3
4th field: 4

sed

This matches three groups of non-spaces followed by spaces with ([^ ]*[ ]*){3}. Then it captures whatever comes until the next space as the 4th field, which is finally printed with \2.

$ sed -r 's/^([^ ]*[ ]*){3}([^ ]*).*/\2/' a
1
2
3
4

Does Hive have a String split function?

Just a clarification on the answer given by Bkkbrad.

I tried this suggestion and it did not work for me.

For example,

split('aa|bb','\\|')

produced:

["","a","a","|","b","b",""]

But,

split('aa|bb','[|]')

produced the desired result:

["aa","bb"]

Including the metacharacter '|' inside the square brackets causes it to be interpreted literally, as intended, rather than as a metacharacter.

For elaboration of this behaviour of regexp, see: http://www.regular-expressions.info/charclass.html

Join String list elements with a delimiter in one step

If you are using Spring you can use the StringUtils.collectionToDelimitedString() method, which also allows you to specify a prefix and suffix.

String s = StringUtils.collectionToDelimitedString(fieldRoles.keySet(),
                "\n", "<value>", "</value>");

How to call a MySQL stored procedure from within PHP code?

I found a solution by using mysqli instead of mysql.

<?php 

// enable error reporting
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
//connect to database
$connection = mysqli_connect("hostname", "user", "password", "db", "port");

//run the store proc
$result = mysqli_query($connection, "CALL StoreProcName");

//loop the result set
while ($row = mysqli_fetch_array($result)){     
  echo $row[0] . " - " . $row[1]; 
}

I found that many people seem to have a problem with using mysql_connect, mysql_query and mysql_fetch_array.

What is for Python what 'explode' is for PHP?

The Python alternative for PHP's explode is str.split().

The first parameter is the delimiter, the second parameter is the maximum number of splits. The parts are returned without the delimiter present (except possibly the last part). When the delimiter is None, any run of whitespace acts as the delimiter; this is the default.

>>> "Rajasekar SP".split()
['Rajasekar', 'SP']

>>> "Rajasekar SP".split('a',2)
['R','j','sekar SP']

Is it safe to use Project Lombok?

I encountered a problem with Lombok and Jackson CSV: when I marshalled my object (a Java bean) to a CSV file, columns were duplicated. After I removed Lombok's @Data annotation, marshalling worked fine.

Android Split string

You might also want to consider the Android specific TextUtils.split() method.

The difference between TextUtils.split() and String.split() is documented with TextUtils.split():

String.split() returns [''] when the string to be split is empty. This returns []. This does not remove any empty strings from the result.

I find this a more natural behavior. In essence TextUtils.split() is just a thin wrapper for String.split(), dealing specifically with the empty-string case. The code for the method is actually quite simple.

C# List<string> to string with delimiter

You can use String.Join. If you have a List<string> then you can call ToArray first:

List<string> names = new List<string>() { "John", "Anna", "Monica" };
var result = String.Join(", ", names.ToArray());

In .NET 4 you don't need the ToArray anymore, since there is an overload of String.Join that takes an IEnumerable<string>.

Results:


John, Anna, Monica

Split a string by a delimiter in python

You can use the str.split method: string.split('__')

>>> "MATCHES__STRING".split("__")
['MATCHES', 'STRING']

Remove last character of a StringBuilder?

stringBuilder.Remove(stringBuilder.Length - 1, 1);

CSV in Python adding an extra carriage return, on Windows

Python 3:

The official csv documentation recommends opening the file with newline='' on all platforms to disable universal newlines translation:

with open('output.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    ...

The CSV writer terminates each line with the lineterminator of the dialect, which is \r\n for the default excel dialect on all platforms.


Python 2:

On Windows, always open your files in binary mode ("rb" or "wb"), before passing them to csv.reader or csv.writer.

Although the file is a text file, CSV is regarded as a binary format by the libraries involved, with \r\n separating records. If that separator is written in text mode, the Python runtime replaces the \n with \r\n, hence the \r\r\n observed in the file.

See this previous answer.

How to write character & in android strings.xml

You can write in this way

<string name="you_me">You &#38; Me</string>

Output: You & Me

How to write header row with csv.DictWriter?

Another way to do this would be to add before adding lines in your output, the following line :

output.writerow(dict(zip(dr.fieldnames, dr.fieldnames)))

zip returns a list of pairs, each containing the same value twice; that list can be used to initialise a dictionary mapping each field name to itself, which is exactly what DictWriter needs for a header row.

Why can't Python parse this JSON data?

Here you go with modified data.json file:

{
    "maps": [
        {
            "id": "blabla",
            "iscategorical": "0"
        },
        {
            "id": "blabla",
            "iscategorical": "0"
        }
    ],
    "masks": [{
        "id": "valore"
    }],
    "om_points": "value",
    "parameters": [{
        "id": "valore"
    }]
}

You can call or print data on console by using below lines:

import json
from pprint import pprint
with open('data.json') as data_file:
    data_item = json.load(data_file)
pprint(data_item)

Expected output for pprint(data_item):

{'maps': [{'id': 'blabla', 'iscategorical': '0'},
          {'id': 'blabla', 'iscategorical': '0'}],
 'masks': [{'id': 'valore'}],
 'om_points': 'value',
 'parameters': [{'id': 'valore'}]}

Expected output for print(data_item['parameters'][0]['id']):

valore

How to join multiple lines of file names into one with custom delimiter?

You can use:

ls -1 | perl -pe 's/\n$/some_delimiter/'

How can I use "." as the delimiter with String.split() in java

You might be interested in the StringTokenizer class. However, the java docs advise that you use the .split method as StringTokenizer is a legacy class.

Importing CSV with line breaks in Excel 2007

Use Google Sheets and import the CSV file.

Then you can export that to use in Excel

Split a string into an array of strings based on a delimiter

Explode is a very fast function; the source algorithm was taken from the TStrings component. My test for Explode: exploding 134217733 bytes of data into 19173962 elements took 2984 ms.

Implode is a much slower function, but it was easy to write.

{ ****************************************************************************** }
{  Explode/Implode (String <> String array)                                      }
{ ****************************************************************************** }
function Explode(S: String; Delimiter: Char): Strings; overload;
var I, C: Integer; P, P1: PChar;
begin
    SetLength(Result, 0);
    if Length(S) = 0 then Exit;
    P:=PChar(S+Delimiter); C:=0;
    while P^ <> #0 do begin
       P1:=P;
       while (P^ <> Delimiter) do P:=CharNext(P);
       Inc(C);
       while P^ in [#1..' '] do P:=CharNext(P);
       if P^ = Delimiter then begin
          repeat
           P:=CharNext(P);
          until not (P^ in [#1..' ']);
       end;
    end;
    SetLength(Result, C);
    P:=PChar(S+Delimiter); I:=-1;
    while P^ <> #0 do begin
       P1:=P;
       while (P^ <> Delimiter) do P:=CharNext(P);
       Inc(I); SetString(Result[I], P1, P-P1);
       while P^ in [#1..' '] do P:=CharNext(P);
       if P^ = Delimiter then begin
          repeat
           P:=CharNext(P);
          until not (P^ in [#1..' ']);
       end;
    end;
end;

function Explode(S: String; Delimiter: Char; Index: Integer): String; overload;
var I: Integer; P, P1: PChar;
begin
    if Length(S) = 0 then Exit;
    P:=PChar(S+Delimiter); I:=1;
    while P^ <> #0 do begin
       P1:=P;
       while (P^ <> Delimiter) do P:=CharNext(P);
        SetString(Result, P1, P-P1);
        if (I <> Index) then Inc(I) else begin
           SetString(Result, P1, P-P1); Exit;
        end;
       while P^ in [#1..' '] do P:=CharNext(P);
       if P^ = Delimiter then begin
          repeat
           P:=CharNext(P);
          until not (P^ in [#1..' ']);
       end;
    end;
end;

function Implode(S: Strings; Delimiter: Char): String;
var iCount: Integer;
begin
     Result:='';
     if (Length(S) = 0) then Exit;
     for iCount:=0 to Length(S)-1 do
     Result:=Result+S[iCount]+Delimiter;
     System.Delete(Result, Length(Result), 1);
end;

How to split a string to 2 strings in C

char *line = strdup("user name"); // don't do char *line = "user name"; see Note

char *first_part = strtok(line, " "); //first_part points to "user"
char *sec_part = strtok(NULL, " ");   //sec_part points to "name"

Note: strtok modifies the string, so don't hand it a pointer to string literal.

How to remove extension from string (only real extension!)

You wrote: "I found many examples on the Google but there are bad because just remove part of string with '.'"

Actually that is absolutely the correct thing to do. Go ahead and use that.

The file extension is everything after the last dot, and there is no requirement for a file extension to be any particular number of characters. Even talking only about Windows, it already comes with file extensions that don't fit 3-4 characters, such as eg. .manifest.

How to split a string, but also keep the delimiters?

Tweaked Pattern.split() to include matched pattern to the list

Added

// add match to the list
        matchList.add(input.subSequence(start, end).toString());

Full source

public static String[] inclusiveSplit(String input, String re, int limit) {
    int index = 0;
    boolean matchLimited = limit > 0;
    ArrayList<String> matchList = new ArrayList<String>();

    Pattern pattern = Pattern.compile(re);
    Matcher m = pattern.matcher(input);

    // Add segments before each match found
    while (m.find()) {
        int end = m.end();
        if (!matchLimited || matchList.size() < limit - 1) {
            int start = m.start();
            String match = input.subSequence(index, start).toString();
            matchList.add(match);
            // add match to the list
            matchList.add(input.subSequence(start, end).toString());
            index = end;
        } else if (matchList.size() == limit - 1) { // last one
            String match = input.subSequence(index, input.length())
                    .toString();
            matchList.add(match);
            index = end;
        }
    }

    // If no match was found, return this
    if (index == 0)
        return new String[] { input.toString() };

    // Add remaining segment
    if (!matchLimited || matchList.size() < limit)
        matchList.add(input.subSequence(index, input.length()).toString());

    // Construct result
    int resultSize = matchList.size();
    if (limit == 0)
        while (resultSize > 0 && matchList.get(resultSize - 1).equals(""))
            resultSize--;
    String[] result = new String[resultSize];
    return matchList.subList(0, resultSize).toArray(result);
}

PHP: Split string into array, like explode with no delimiter

$array = str_split("$string");

will actually work pretty fine, BUT if you want to preserve the special characters in that string, and you want to do some manipulation with them, THEN I would use

do {
    $array[] = mb_substr( $string, 0, 1, 'utf-8' );
} while ( $string = mb_substr( $string, 1, mb_strlen( $string ), 'utf-8' ) );

because, for some of my personal uses, it has shown to be more reliable when there is an issue with special characters.

sort csv by column

To sort by MULTIPLE COLUMNS (sort by column_1, then by column_2):

import csv

with open('unsorted.csv', newline='') as csvfile:
    spamreader = csv.DictReader(csvfile, delimiter=";")
    sortedlist = sorted(spamreader, key=lambda row: (row['column_1'], row['column_2']), reverse=False)


with open('sorted.csv', 'w', newline='') as f:
    fieldnames = ['column_1', 'column_2', 'column_3']
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    for row in sortedlist:
        writer.writerow(row)

Cannot find either column "dbo" or the user-defined function or aggregate "dbo.Splitfn", or the name is ambiguous

Since people will be coming from Google, make sure you're in the right database.

Running SQL in the 'master' database will often return this error.

Create a .csv file with values from a Python list

To create and write into a csv file

The example below demonstrates creating and writing a CSV file. To write files dynamically we first import the csv package, then create the file with a file reference, e.g. with open("D:\\sample.csv", "w", newline="") as file_writer.

If the file does not exist at the given path, Python creates it. "w" means write; use "r" to read an existing file or "a" to append to one. newline="" prevents an extra empty row from being written after every row. Next, define some field names (column names) in a list, e.g. fields = ["Names", "Age", "Class"], and pass them to a DictWriter instance: writer = csv.DictWriter(file_writer, fieldnames=fields). Write the column names with writer.writeheader() and write values with writer.writerow({"Names": "John", "Age": 20, "Class": "12A"}); values are passed as a dictionary whose keys are the column names.

import csv

with open("D:\\sample.csv", "w", newline="") as file_writer:
    fields = ["Names", "Age", "Class"]
    writer = csv.DictWriter(file_writer, fieldnames=fields)
    writer.writeheader()
    writer.writerow({"Names": "John", "Age": 21, "Class": "12A"})

Copy/Paste from Excel to a web page

Maybe it would be better if you would read your excel file from PHP, and then either save it to a DB or do some processing on it.

Here is an in-depth tutorial on how to read and write Excel data with PHP:
http://www.ibm.com/developerworks/opensource/library/os-phpexcel/index.html

String parsing in Java with delimiter tab "\t" using split

String.split uses regular expressions; also, you don't need to allocate an extra array for your split.

The split method will give you a list. The problem is that you try to pre-define how many occurrences of a tab you have, but how would you really know that? Try using Scanner or StringTokenizer and just learn how splitting strings works.

Let me explain Why \t does not work and why you need \\\\ to escape \\.

Okay, so when you use Split, it actually takes a regex ( Regular Expression ) and in regular expression you want to define what Character to split by, and if you write \t that actually doesn't mean \t and what you WANT to split by is \t, right? So, by just writing \t you tell your regex-processor that "Hey split by the character that is escaped t" NOT "Hey split by all characters looking like \t". Notice the difference? Using \ means to escape something. And \ in regex means something Totally different than what you think.

So this is why you need to use this Solution:

\\t

To tell the regex processor to look for \t. Okay, so why would you need two of em? Well, the first \ escapes the second, which means it will look like this: \t when you are processing the text!

Now let's say that you are looking to split \

Well then you would be left with \\ but see, that doesn't Work! because \ will try to escape the previous char! That is why you want the Output to be \\ and therefore you need to have \\\\.

I really hope the examples above helps you understand why your solution doesn't work and how to conquer other ones!

Now, I've given you this answer before, maybe you should start looking at them now.

OTHER METHODS

StringTokenizer

You should look into the StringTokenizer, it's a very handy tool for this type of work.

Example

 StringTokenizer st = new StringTokenizer("this is a test");
 while (st.hasMoreTokens()) {
     System.out.println(st.nextToken());
 }

This will output

 this
 is
 a
 test

You use the Second Constructor for StringTokenizer to set the delimiter:

StringTokenizer(String str, String delim)

Scanner

You could also use a Scanner as one of the commentators said this could look somewhat like this

Example

 String input = "1 fish 2 fish red fish blue fish";

 Scanner s = new Scanner(input).useDelimiter("\\s*fish\\s*");

 System.out.println(s.nextInt());
 System.out.println(s.nextInt());
 System.out.println(s.next());
 System.out.println(s.next());

 s.close(); 

The output would be

 1
 2
 red
 blue 

Meaning that it will cut out the word "fish" and give you the rest, using "fish" as the delimiter.

examples taken from the Java API

How long will my session last?

You're searching for gc_maxlifetime, see http://php.net/manual/en/session.configuration.php#ini.session.gc-maxlifetime for a description.

Your session will last 1440 seconds which is 24 minutes (default).

Regular Expression to find a string included between two characters while EXCLUDING the delimiters

This one specifically works for javascript's regular expression parser /[^[\]]+(?=])/g

just run this in the console

var regex = /[^[\]]+(?=])/g;
var str = "This is a test string [more or less]";
var match = regex.exec(str);
match;

linux shell script: split string, put them in an array then loop through them

You can probably skip the step of explicitly creating an array...

One trick that I like to use is to set the inter-field separator (IFS) to the delimiter character. This is especially handy for iterating through the space or return delimited results from the stdout of any of a number of unix commands.

Below is an example using semicolons (as you had mentioned in your question):

export IFS=";"
sentence="one;two;three"
for word in $sentence; do
  echo "$word"
done

Note: in regular Bourne-shell scripting setting and exporting the IFS would occur on two separate lines (IFS='x'; export IFS;).

Read entire file in Scala?

To emulate the Ruby syntax (and convey the semantics) of opening and reading a file, consider this implicit class (Scala 2.10 and higher):

import java.io.File

def open(filename: String) = new File(filename)

implicit class RichFile(val file: File) extends AnyVal {
  def read = io.Source.fromFile(file).getLines.mkString("\n")
}

In this way,

open("file.txt").read

string.split - by multiple character delimiter

string tests = "abc][rfd][5][,][.";
string[] reslts = tests.Split(new char[] { ']', '[' }, StringSplitOptions.RemoveEmptyEntries);

How do I split a string by a multi-character delimiter in C#?

string s = "This is a sentence.";
string[] res = s.Split(new string[]{ " is " }, StringSplitOptions.None);

for(int i = 0; i < res.Length; i++)
    Console.Write(res[i]);

EDIT: The "is" is padded on both sides with spaces in the array in order to preserve the fact that you only want the word "is" removed from the sentence and the word "this" to remain intact.

How to export table as CSV with headings on Postgresql?

When I don't have permission to write a file out from Postgres I find that I can run the query from the command line.

psql -U user -d db_name -c "Copy (Select * From foo_table LIMIT 10) To STDOUT With CSV HEADER DELIMITER ',';" > foo_data.csv

mysql stored-procedure: out parameter

try changing OUT to INOUT for your out_number parameter definition.

CREATE PROCEDURE my_sqrt(input_number INT, INOUT out_number FLOAT)

INOUT means that the input variable for out_number (@out_value in your case) will also serve as the output variable from which you can select the value.
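
A hypothetical call would then look like this, assuming the procedure body assigns a value to out_number:

CALL my_sqrt(4, @out_value);
SELECT @out_value; -- reads back whatever the procedure stored in the INOUT parameter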

Split Strings into words with multiple word boundary delimiters

re.split()

re.split(pattern, string[, maxsplit=0])

Split string by the occurrences of pattern. If capturing parentheses are used in pattern, then the text of all groups in the pattern are also returned as part of the resulting list. If maxsplit is nonzero, at most maxsplit splits occur, and the remainder of the string is returned as the final element of the list. (Incompatibility note: in the original Python 1.5 release, maxsplit was ignored. This has been fixed in later releases.)

>>> re.split('\W+', 'Words, words, words.')
['Words', 'words', 'words', '']
>>> re.split('(\W+)', 'Words, words, words.')
['Words', ', ', 'words', ', ', 'words', '.', '']
>>> re.split('\W+', 'Words, words, words.', 1)
['Words', 'words, words.']

Sorting a tab delimited file

Using bash, this will do the trick:

$ sort -t$'\t' -k3 -nr file.txt

Notice the dollar sign in front of the single-quoted string. You can read about it in the ANSI-C Quoting sections of the bash man page.

add column to mysql table if it does not exist

If you are on MariaDB, no need to use stored procedures. Just use, for example:

ALTER TABLE table_name ADD COLUMN IF NOT EXISTS column_name tinyint(1) DEFAULT 0;

See here

How do I split a string on a delimiter in Bash?

Okay guys!

Here's my answer!

DELIMITER_VAL='='

read -d '' F_ABOUT_DISTRO_R <<"EOF"
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04.4 LTS"
NAME="Ubuntu"
VERSION="14.04.4 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.4 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
EOF

SPLIT_NOW=$(awk -F$DELIMITER_VAL '{for(i=1;i<=NF;i++){printf "%s\n", $i}}' <<<"${F_ABOUT_DISTRO_R}")
while read -r line; do
   SPLIT+=("$line")
done <<< "$SPLIT_NOW"
for i in "${SPLIT[@]}"; do
    echo "$i"
done

Why this approach is "the best" for me?

Because of two reasons:

  1. You do not need to escape the delimiter;
  2. You will not have problems with blank spaces. The values will be properly separated in the array!

[]'s

CSV parsing in Java - working example..?

Writing your own parser is fun, but you should likely have a look at OpenCSV. It provides numerous ways of accessing the CSV and also allows you to generate CSV. And it does handle escapes properly. As mentioned in another post, there is also a CSV-parsing lib in Apache Commons, but that one isn't released yet.
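
For reference, a minimal sketch of reading a file with OpenCSV might look like this (assuming OpenCSV 3.x or later is on the classpath; the file name is made up):

import com.opencsv.CSVReader;
import java.io.FileReader;

public class OpenCsvDemo {
    public static void main(String[] args) throws Exception {
        // Each call to readNext() returns one row as a String[] of fields, or null at end of file.
        try (CSVReader reader = new CSVReader(new FileReader("employees.csv"))) {
            String[] row;
            while ((row = reader.readNext()) != null) {
                System.out.println(row[0]);
            }
        }
    }
}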

Use space as a delimiter with cut command

You can't do it easily with cut if the data has for example multiple spaces. I have found it useful to normalize input for easier processing. One trick is to use sed for normalization as below.

echo -e "foor\t \t bar" | sed 's:\s\+:\t:g' | cut -f2  #bar

Double.TryParse or Convert.ToDouble - which is faster and safer?

I have always preferred using the TryParse() methods because they report success or failure of the conversion without you having to worry about exceptions.
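
A small illustrative sketch (the input string is just an example):

string input = "3.14";
if (double.TryParse(input, out double value))
    Console.WriteLine("Parsed: " + value);   // no exception handling needed
else
    Console.WriteLine("Not a valid double");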

Concat all strings inside a List<string> using LINQ

I have done this using LINQ:

var oCSP = (from P in db.Products select new { P.ProductName });

string joinedString = string.Join(",", oCSP.Select(p => p.ProductName));

Split List into Sublists with LINQ

In general the approach suggested by CaseyB works fine; in fact, if you are passing in a List<T> it is hard to fault it. Perhaps I would change it to:

public static IEnumerable<IEnumerable<T>> ChunkTrivialBetter<T>(this IEnumerable<T> source, int chunksize)
{
   var pos = 0; 
   while (source.Skip(pos).Any())
   {
      yield return source.Skip(pos).Take(chunksize);
      pos += chunksize;
   }
}

Which will avoid massive call chains. Nonetheless, this approach has a general flaw: it materializes two enumerations per chunk. To highlight the issue, try running:

foreach (var item in Enumerable.Range(1, int.MaxValue).Chunk(8).Skip(100000).First())
{
   Console.WriteLine(item);
}
// wait forever 

To overcome this we can try Cameron's approach, which passes the above test with flying colors as it only walks the enumeration once.

Trouble is that it has a different flaw: it materializes every item in each chunk, so with that approach you run high on memory.

To illustrate that try running:

foreach (var item in Enumerable.Range(1, int.MaxValue)
               .Select(x => x + new string('x', 100000))
               .Clump(10000).Skip(100).First())
{
   Console.Write('.');
}
// OutOfMemoryException

Finally, any implementation should be able to handle out of order iteration of chunks, for example:

Enumerable.Range(1,3).Chunk(2).Reverse().ToArray()
// should return [3],[1,2]

Many highly optimal solutions like my first revision of this answer failed there. The same issue can be seen in casperOne's optimized answer.

To address all these issues you can use the following:

namespace ChunkedEnumerator
{
    public static class Extensions 
    {
        class ChunkedEnumerable<T> : IEnumerable<T>
        {
            class ChildEnumerator : IEnumerator<T>
            {
                ChunkedEnumerable<T> parent;
                int position;
                bool done = false;
                T current;


                public ChildEnumerator(ChunkedEnumerable<T> parent)
                {
                    this.parent = parent;
                    position = -1;
                    parent.wrapper.AddRef();
                }

                public T Current
                {
                    get
                    {
                        if (position == -1 || done)
                        {
                            throw new InvalidOperationException();
                        }
                        return current;

                    }
                }

                public void Dispose()
                {
                    if (!done)
                    {
                        done = true;
                        parent.wrapper.RemoveRef();
                    }
                }

                object System.Collections.IEnumerator.Current
                {
                    get { return Current; }
                }

                public bool MoveNext()
                {
                    position++;

                    if (position + 1 > parent.chunkSize)
                    {
                        done = true;
                    }

                    if (!done)
                    {
                        done = !parent.wrapper.Get(position + parent.start, out current);
                    }

                    return !done;

                }

                public void Reset()
                {
                    // per http://msdn.microsoft.com/en-us/library/system.collections.ienumerator.reset.aspx
                    throw new NotSupportedException();
                }
            }

            EnumeratorWrapper<T> wrapper;
            int chunkSize;
            int start;

            public ChunkedEnumerable(EnumeratorWrapper<T> wrapper, int chunkSize, int start)
            {
                this.wrapper = wrapper;
                this.chunkSize = chunkSize;
                this.start = start;
            }

            public IEnumerator<T> GetEnumerator()
            {
                return new ChildEnumerator(this);
            }

            System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
            {
                return GetEnumerator();
            }

        }

        class EnumeratorWrapper<T>
        {
            public EnumeratorWrapper (IEnumerable<T> source)
            {
                SourceEumerable = source;
            }
            IEnumerable<T> SourceEumerable {get; set;}

            Enumeration currentEnumeration;

            class Enumeration
            {
                public IEnumerator<T> Source { get; set; }
                public int Position { get; set; }
                public bool AtEnd { get; set; }
            }

            public bool Get(int pos, out T item) 
            {

                if (currentEnumeration != null && currentEnumeration.Position > pos)
                {
                    currentEnumeration.Source.Dispose();
                    currentEnumeration = null;
                }

                if (currentEnumeration == null)
                {
                    currentEnumeration = new Enumeration { Position = -1, Source = SourceEumerable.GetEnumerator(), AtEnd = false };
                }

                item = default(T);
                if (currentEnumeration.AtEnd)
                {
                    return false;
                }

                while(currentEnumeration.Position < pos) 
                {
                    currentEnumeration.AtEnd = !currentEnumeration.Source.MoveNext();
                    currentEnumeration.Position++;

                    if (currentEnumeration.AtEnd) 
                    {
                        return false;
                    }

                }

                item = currentEnumeration.Source.Current;

                return true;
            }

            int refs = 0;

            // needed for dispose semantics 
            public void AddRef()
            {
                refs++;
            }

            public void RemoveRef()
            {
                refs--;
                if (refs == 0 && currentEnumeration != null)
                {
                    var copy = currentEnumeration;
                    currentEnumeration = null;
                    copy.Source.Dispose();
                }
            }
        }

        public static IEnumerable<IEnumerable<T>> Chunk<T>(this IEnumerable<T> source, int chunksize)
        {
            if (chunksize < 1) throw new InvalidOperationException();

            var wrapper =  new EnumeratorWrapper<T>(source);

            int currentPos = 0;
            T ignore;
            try
            {
                wrapper.AddRef();
                while (wrapper.Get(currentPos, out ignore))
                {
                    yield return new ChunkedEnumerable<T>(wrapper, chunksize, currentPos);
                    currentPos += chunksize;
                }
            }
            finally
            {
                wrapper.RemoveRef();
            }
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            int i = 10;
            foreach (var group in Enumerable.Range(1, int.MaxValue).Skip(10000000).Chunk(3))
            {
                foreach (var n in group)
                {
                    Console.Write(n);
                    Console.Write(" ");
                }
                Console.WriteLine();
                if (i-- == 0) break;
            }


            var stuffs = Enumerable.Range(1, 10).Chunk(2).ToArray();

            foreach (var idx in new [] {3,2,1})
            {
                Console.Write("idx " + idx + " ");
                foreach (var n in stuffs[idx])
                {
                    Console.Write(n);
                    Console.Write(" ");
                }
                Console.WriteLine();
            }

            /*

10000001 10000002 10000003
10000004 10000005 10000006
10000007 10000008 10000009
10000010 10000011 10000012
10000013 10000014 10000015
10000016 10000017 10000018
10000019 10000020 10000021
10000022 10000023 10000024
10000025 10000026 10000027
10000028 10000029 10000030
10000031 10000032 10000033
idx 3 7 8
idx 2 5 6
idx 1 3 4
             */

            Console.ReadKey();


        }

    }
}

There is also a round of optimisations you could introduce for out-of-order iteration of chunks, which is out of scope here.

As to which method you should choose? It totally depends on the problem you are trying to solve. If you are not concerned with the first flaw the simple answer is incredibly appealing.

Note that, as with most methods, this is not safe for multi-threading; stuff can get weird. If you wish to make it thread safe, you would need to amend EnumeratorWrapper.

Sorting multiple keys with Unix sort

The -k option is what you want.

-k 1.4,1.5n -k 1.14,1.15n

Would use character positions 4-5 in the first field (it's all one field for fixed width) and sort numerically as the first key.

The second key would be characters 14-15 in the first field also.

(edit)

Example (all I have is DOS/cygwin handy):

dir | \cygwin\bin\sort.exe -k 1.4,1.5n -k 1.40,1.60r

for the data:

12/10/2008  01:10 PM         1,564,990 outfile.txt

Sorts the directory listing by month number (pos 4-5) numerically, and then by filename (pos 40-60) in reverse. Since there are no tabs, it's all field 1 to sort.

Tokenizing strings in C

strtok can be very dangerous. It is not thread safe. Its intended use is to be called over and over in a loop, passing NULL instead of the string after the first call. The strtok function has an internal variable that stores the state of the strtok call. This state is not unique to each thread - it is global. If any other code uses strtok in another thread, you get problems. Not the kind of problems you want to track down either!

I'd recommend looking for a regex implementation, or using sscanf to pull apart the string.

Try this:

char strprint[256];
char text[256];
const char *p = text;
int used;

strcpy(text, "My string to test");
/* %n records how many characters sscanf consumed, so p can advance past each token. */
while ( sscanf(p, "%255s%n", strprint, &used) == 1 ) {
   printf("token: %s\n", strprint);
   p += used;
}

Note: Unlike strtok, this does not modify the original string; the pointer p just walks along it as tokens are read.

How to split a string with any whitespace chars as delimiters

Study this code.. good luck

import java.util.*;

class Demo {
    public static void main(String args[]) {
        Scanner input = new Scanner(System.in);
        System.out.print("Input String : ");
        String s1 = input.nextLine();
        String[] tokens = s1.split("[\\s\\xA0]+");
        System.out.println(tokens.length);
        for (String s : tokens) {
            System.out.println(s);
        }
    }
}

How do I Sort a Multidimensional Array in PHP

You can use array_multisort()

Try something like this:

foreach ($mdarray as $key => $row) {
    // replace 0 with the field's index/key
    $dates[$key]  = $row[0];
}

array_multisort($dates, SORT_DESC, $mdarray);

For PHP >= 5.5.0 just extract the column to sort by. No need for the loop:

array_multisort(array_column($mdarray, 0), SORT_DESC, $mdarray);

What's the best way to build a string of delimited items in Java?

I would use Google Collections. There is a nice Join facility.
http://google-collections.googlecode.com/svn/trunk/javadoc/index.html?com/google/common/base/Join.html

But if I wanted to write it on my own,

package util;

import java.util.ArrayList;
import java.util.Collections;
import java.util.Iterator;

public class Utils {
    // accept a collection of objects, since all objects have toString()
    public static String join(String delimiter, Iterable<? extends Object> objs) {
        Iterator<? extends Object> iter = objs.iterator();
        if (!iter.hasNext()) {
            return "";
        }
        StringBuilder buffer = new StringBuilder();
        buffer.append(iter.next());
        while (iter.hasNext()) {
            buffer.append(delimiter).append(iter.next());
        }
        return buffer.toString();
    }

    // for convenience
    public static String join(String delimiter, Object... objs) {
        ArrayList<Object> list = new ArrayList<Object>();
        Collections.addAll(list, objs);
        return join(delimiter, list);
    }
}

I think it works better with an object collection, since now you don't have to convert your objects to strings before you join them.

What is the regex pattern for datetime (2008-09-01 12:35:45 )?

^([2][0]\d{2}\/([0]\d|[1][0-2])\/([0-2]\d|[3][0-1]))$|^([2][0]\d{2}\/([0]\d|[1][0-2])\/([0-2]\d|[3][0-1])\s([0-1]\d|[2][0-3])\:[0-5]\d\:[0-5]\d)$

Find child element in AngularJS directive

jqLite (Angular's "jQuery" port) doesn't support lookup by classes.

One solution would be to include jQuery in your app.

Another is using querySelector or querySelectorAll:

link: function(scope, element, attrs) {
   console.log(element[0].querySelector('.list-scrollable'))
}

We use the first item in the element array, which is the HTML element. element.eq(0) would yield the same.


How to make html table vertically scrollable

The best way to do this is to strictly separate your table into two different tables: header and body:

<div class="header">
  <table><tr><!-- th here --></tr></table>
</div>

<div class="body">
  <table><tr><!-- td here --></tr></table>
</div>

.body {
  height: 100px;
  overflow: auto
}

If your table has a big width (more than the screen width), then you have to add scroll events so that the header and body scroll horizontally in sync.
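
A rough sketch of that syncing in plain JavaScript (it assumes the .header and .body wrappers above; the header wrapper would also need overflow-x: hidden and the same inner width for this to have a visible effect):

document.querySelector('.body').addEventListener('scroll', function () {
  // keep the header horizontally aligned with the scrolled body
  document.querySelector('.header').scrollLeft = this.scrollLeft;
});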

You should never touch table tags (table, tbody, thead, tfoot, tr) with CSS properties display and overflow. Dealing with DIV wrappers is much more preferable.

How do I check to see if my array includes an object?

Array's include? method accepts any object, not just a string. This should work:

@suggested_horses = [] 
@suggested_horses << Horse.first(:offset => rand(Horse.count)) 
while @suggested_horses.length < 8 
  horse = Horse.first(:offset => rand(Horse.count)) 
  @suggested_horses << horse unless @suggested_horses.include?(horse)
end

New line in JavaScript alert box

Just in case this helps anyone, when doing this from C# code behind I had to use a double escape character or I got an "unterminated string constant" JavaScript error:

ScriptManager.RegisterStartupScript(this, this.GetType(), "scriptName", "alert(\"Line 1.\\n\\nLine 2.\");", true);

Accessing Session Using ASP.NET Web API

To fix the issue:

protected void Application_PostAuthorizeRequest()
{
    System.Web.HttpContext.Current.SetSessionStateBehavior(System.Web.SessionState.SessionStateBehavior.Required);
}

in Global.asax.cs

Why use Gradle instead of Ant or Maven?

Gradle can be used for many purposes - it's a much better Swiss army knife than Ant - but it's specifically focused on multi-project builds.

First of all, Gradle is a dependency programming tool which also means it's a programming tool. With Gradle you can execute any random task in your setup and Gradle will make sure all declared dependencies are properly executed in a timely manner. Your code can be spread across many directories in any kind of layout (tree, flat, scattered, ...).

Gradle has two distinct phases: evaluation and execution. Basically, during evaluation Gradle will look for and evaluate build scripts in the directories it is supposed to look. During execution Gradle will execute tasks which have been loaded during evaluation taking into account task inter-dependencies.

On top of these dependency programming features Gradle adds project and JAR dependency features by integration with Apache Ivy. As you know, Ivy is a much more powerful and much less opinionated dependency management tool than, say, Maven.

Gradle detects dependencies between projects and between projects and JARs. Gradle works with Maven repositories (download and upload) like the iBiblio one or your own repositories, but it also supports any other kind of repository infrastructure you might have.

In multi-project builds Gradle is both adaptable and adapts to the build's structure and architecture. You don't have to adapt your structure or architecture to your build tool as would be required with Maven.

Gradle tries very hard not to get in your way, an effort Maven almost never makes. Convention is good yet so is flexibility. Gradle gives you many more features than Maven does but most importantly in many cases Gradle will offer you a painless transition path away from Maven.

Retrieving data from a POST method in ASP.NET

You can get a form value posted to a page using code similar to this (C#):

string formValue;
if (!string.IsNullOrEmpty(Request.Form["txtFormValue"]))
{
  formValue= Request.Form["txtFormValue"];
}

or this (VB)

Dim formValue As String
If Not String.IsNullOrEmpty(Request.Form("txtFormValue")) Then
    formValue = Request.Form("txtFormValue")
End If

Once you have the values you need, you can then construct a SQL statement and write the data to a database.

How to get the anchor from the URL using jQuery?

For current window, you can use this:

var hash = window.location.hash.substr(1);

To get the hash value of the main window, use this:

var hash = window.top.location.hash.substr(1);

If you have a string with an URL/hash, the easiest method is:

var url = 'https://www.stackoverflow.com/questions/123/abc#10076097';
var hash = url.split('#').pop();

If you're using jQuery, use this:

var hash = $(location).attr('hash');

How do I put a variable inside a string?

Not sure exactly what all the code you posted does, but to answer the question posed in the title, you can use + as the normal string concat function as well as str().

"hello " + str(10) + " world" = "hello 10 world"

Hope that helps!

Format of the initialization string does not conform to specification starting at index 0

I copied and pasted my connection string configuration into my test project and started running into this error. The connection string worked fine in my WebAPI project. Here is my fix.

var connection = ConfigurationManager.ConnectionStrings["MyConnectionString"];
var unitOfWork = new UnitOfWork(new SqlConnection(connection.ConnectionString));

Eclipse error: "The import XXX cannot be resolved"

[Code Igniter frame work] [Import library]

Try right-clicking on the project in the Project Explorer view and choosing Refresh. Surprisingly, all error markers were gone in my case.


I'm not good at Eclipse. I started using Android Studio to develop for Android. But it's annoying to see red error markers hovering all the time whilst the whole project still works fine. It happened to me when I used composer to import a library like this:

require __DIR__ . '/../../vendor/autoload.php';

use \Firebase\JWT\JWT;

How to render string with html tags in Angular 4+?

Use one way flow syntax property binding:

<div [innerHTML]="comment"></div>

From angular docs: "Angular recognizes the value as unsafe and automatically sanitizes it, which removes the <script> tag but keeps safe content such as the <b> element."

How to complete the RUNAS command in one line

The runas command does not allow a password on its command line. This is by design (and also the reason you cannot pipe a password to it as input). Raymond Chen says it nicely:

The RunAs program demands that you type the password manually. Why doesn't it accept a password on the command line?

This was a conscious decision. If it were possible to pass the password on the command line, people would start embedding passwords into batch files and logon scripts, which is laughably insecure.

In other words, the feature is missing to remove the temptation to use the feature insecurely.

Regular expression matching a multiline block of text

The following is a regular expression matching a multiline block of text:

import re
result = re.findall('(startText)(.+)((?:\n.+)+)(endText)',input)

git-diff to ignore ^M

GitHub suggests that you should make sure to only use \n as a newline character in git-handled repos. There's an option to auto-convert:

$ git config --global core.autocrlf true

Of course, this is said to convert crlf to lf, while you want to convert cr to lf. I hope this still works …

And then convert your files:

# Remove everything from the index
$ git rm --cached -r .

# Re-add all the deleted files to the index
# You should get lots of messages like: "warning: CRLF will be replaced by LF in <file>."
$ git diff --cached --name-only -z | xargs -0 git add

# Commit
$ git commit -m "Fix CRLF"

core.autocrlf is described on the man page.

Create PDF with Java

I prefer outputting my data into XML (using Castor, XStream or JAXB), then transforming it using a XSLT stylesheet into XSL-FO and render that with Apache FOP into PDF. Worked so far for 10-page reports and 400-page manuals. I found this more flexible and stylable than generating PDFs in code using iText.

What is the difference between lower bound and tight bound?

If you have something that's O(f(n)), that means there are constants k and n0 such that, for all n ≥ n0, it is at most k·f(n): an upper bound.

If you have something that's Ω(f(n)), that means there are constants k and n0 such that, for all n ≥ n0, it is at least k·f(n): a lower bound.

And if you have something that is both O(f(n)) and Ω(f(n)), then it's Θ(f(n)): a tight bound.
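
For a concrete example (the function is made up purely for illustration): T(n) = 3n² + 5n is O(n²) because T(n) ≤ 4n² for all n ≥ 5, and it is Ω(n²) because T(n) ≥ 3n² for all n ≥ 1; having both bounds makes it Θ(n²).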

The Wikipedia article is decent, if a little dense.

Get div's offsetTop positions in React

A better solution is to use a ref, which avoids findDOMNode (its use is discouraged).

...
onScroll() {
    let offsetTop  = this.instance.getBoundingClientRect().top;
}
...
render() {
...
<Component ref={(el) => this.instance = el } />
...

How to set Android camera orientation properly?

This solution will work for all versions of Android. You can use reflection in Java to make it work for all Android devices:

Basically you should create a reflection wrapper to call the Android 2.2 setDisplayOrientation, instead of calling the specific method.

The method:

protected void setDisplayOrientation(Camera camera, int angle) {
    Method downPolymorphic;
    try {
        downPolymorphic = camera.getClass().getMethod("setDisplayOrientation", new Class[] { int.class });
        if (downPolymorphic != null)
            downPolymorphic.invoke(camera, new Object[] { angle });
    } catch (Exception e1) {
    }
}

And instead of using camera.setDisplayOrientation(x) use setDisplayOrientation(camera, x) :

    if (Integer.parseInt(Build.VERSION.SDK) >= 8)
        setDisplayOrientation(mCamera, 90);
    else
    {
        if (getResources().getConfiguration().orientation == Configuration.ORIENTATION_PORTRAIT)
        {
            p.set("orientation", "portrait");
            p.set("rotation", 90);
        }
        if (getResources().getConfiguration().orientation == Configuration.ORIENTATION_LANDSCAPE)
        {
            p.set("orientation", "landscape");
            p.set("rotation", 90);
        }
    }   

How do I make a transparent canvas in html5?

Just set the background of the canvas to transparent.

#canvasID{
    background:transparent;
}

Search a string in a file and delete it from this file by Shell Script

This should do it:

sed -e s/deletethis//g -i *
sed -e "s/deletethis//g" -i.backup *
sed -e "s/deletethis//g" -i .backup *

The first form will replace all occurrences of "deletethis" with "" (nothing) in all files (*), editing them in place.

In the second form the pattern can be edited a little safer, and it makes backups of any modified files, by suffixing them with ".backup".

The third form is the way some versions of sed like it. (e.g. Mac OS X)

man sed for more information.

How to insert a row in an HTML table body in JavaScript

You're close. Just add the row to the tbody instead of table:

myTbody.insertRow();

Just get a reference to the tBody (myTbody) before use. Notice that you don't need to pass the last position in a table; the row is automatically positioned at the end when the argument is omitted.

A live demo is at jsFiddle.

What is the difference between syntax and semantics in programming languages?

  • You need correct syntax to compile.
  • You need correct semantics to make it work.

APK signing error : Failed to read key from keystore

In my case, while copying the text from another source, it somehow included a space at the end of the clipboard entry. That way the key password had a space at the end.

How to load assemblies in PowerShell?

None of the answers helped me, so I'm posting the solution that worked for me. All I had to do was import the SQLPS module. I realized this when I ran the Restore-SqlDatabase command by accident and things started working, meaning that the assembly was referenced in that module somehow.

Just run:

Import-module SQLPS

Note: Thanks Jason for noting that SQLPS is deprecated

instead run:

Import-Module SqlServer

or

Install-Module SqlServer

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". error

If you are using Gradle add this:

dependencies { 
... 
compile "org.slf4j:slf4j-simple:1.7.9" 
... 
}

php timeout - set_time_limit(0); - don't work

Check the php.ini

ini_set('max_execution_time', 300); //300 seconds = 5 minutes

ini_set('max_execution_time', 0); //0=NOLIMIT

How can I programmatically determine if my app is running in the iphone simulator?

Works for Swift 5 and Xcode 12

Use this code:

#if targetEnvironment(simulator)
   // Simulator
#else
   // Device
#endif

Opening a remote machine's Windows C drive

If you need a drive letter (some applications don't like UNC style paths that start with a machine-name) you can "map a drive" to a UNC path. Right-click on "My Computer" and select Map Network Drive... or use this command line:

NET USE z: \\server\c$\folder1\folder2

NET USE y: \\server\d$

Note that you can map drive-to-drive or drill down and map to sub-folder.

adding a datatable in a dataset

You have to set the TableName you want on your dtImage, for instance:

dtImage.TableName="mydtimage";


if(!ds.Tables.Contains(dtImage.TableName))
        ds.Tables.Add(dtImage);

It will be reflected in the dataset because the dataset is a container for your datatable dtImage and you hold a reference to dtImage.

How to access the GET parameters after "?" in Express?

The express manual says that you should use req.query to access the QueryString.

// Requesting /display/post?size=small
app.get('/display/post', function(req, res, next) {

  var isSmall = req.query.size === 'small'; // > true
  // ...

});

How to unmerge a Git merge?

If you haven't committed the merge, then use:

git merge --abort

swift UITableView set rowHeight

Problem Cause:
The problem is that the cell has not been created yet. The table view first calculates the height for each row and then populates the data for each row, so the rows array has not been created when the heightForRow method gets called. So your app is trying to access a memory location it does not have permission to access, and therefore you get the EXC_BAD_ACCESS message.

How to achieve self sizing TableViewCell in UITableView:
Just set proper constraints for the views contained in the TableViewCell's view in the storyboard. Remember you shouldn't set height constraints on the TableViewCell's root view; its height should be properly computable from the heights of its subviews. This is like what you do to set proper constraints for a UIScrollView. This way your cells will get different heights according to their subviews. No additional action is needed.
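
If the table view does not pick this up automatically (it is the default on recent iOS versions), the usual extra step, sketched here for a hypothetical tableView outlet, is:

tableView.rowHeight = UITableView.automaticDimension
tableView.estimatedRowHeight = 44  // any reasonable estimate helps scrolling performance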

Multiple argument IF statement - T-SQL

Seems to work fine.

If you have an empty BEGIN ... END block you might see

Msg 102, Level 15, State 1, Line 10 Incorrect syntax near 'END'.

Converting a column within pandas dataframe from int to string

Just for an additional reference.

All of the above answers will work in the case of a data frame. But if you are using a lambda while creating/modifying a column, this won't work, because there the value is treated as an int attribute instead of a pandas Series. You have to use str(target_attribute) to make it a string. Please refer to the example below.

def add_zero_in_prefix(df):
    if(df['Hour']<10):
        return '0' + str(df['Hour'])

data['str_hr'] = data.apply(add_zero_in_prefix, axis=1)

"npm config set registry https://registry.npmjs.org/" is not working in windows bat file

By executing your .bat you are setting the config for only that session, not globally. When you open another cmd prompt and run npm install, that config will not be set for that session, so modify your .bat file like this:

@echo off
npm config set registry https://registry.npmjs.org/
@cmd.exe /K

Using a custom typeface in Android

I don't know if it changes the whole app, but I have managed to change some components that couldn't otherwise be changed by doing this:

Typeface tf = Typeface.createFromAsset(getAssets(), "fonts/Lucida Sans Unicode.ttf");
Typeface.class.getField("DEFAULT").setAccessible(true);
Typeface.class.getField("DEFAULT_BOLD").setAccessible(true);
Typeface.class.getField("DEFAULT").set(null, tf);
Typeface.class.getField("DEFAULT_BOLD").set(null, tf);

Difference between Static and final?

A static variable's value can be changed, although only one copy of the variable exists throughout the application, whereas a final variable's value can be initialized once and cannot be changed afterwards.
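
A small illustrative sketch (the class and field names are made up):

class Counter {
    static int instances = 0;        // one shared copy, can be reassigned
    static final int MAX = 100;      // one shared copy, cannot be reassigned

    Counter() {
        instances++;                 // fine: static but not final
        // MAX++;                    // would not compile: MAX is final
    }
}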

Git - deleted some files locally, how do I get them from a remote repository

Since git is a distributed VCS, your local repository contains all of the information. No downloading is necessary; you just need to extract the content you want from the repo at your fingertips.

If you haven't committed the deletion, just check out the files from your current commit:

git checkout HEAD <path>

If you have committed the deletion, you need to check out the files from a commit that has them. Presumably it would be the previous commit:

git checkout HEAD^ <path>

but if it's n commits ago, use HEAD~n, or simply fire up gitk, find the SHA1 of the appropriate commit, and paste it in.

Vector erase iterator

Because the erase method in vector returns the iterator following the one you passed in.

I will give an example of how to remove elements from a vector while iterating.

void test_del_vector(){
    std::vector<int> vecInt{0, 1, 2, 3, 4, 5};

    //method 1
    for(auto it = vecInt.begin();it != vecInt.end();){
        if(*it % 2){// remove all the odds
            it = vecInt.erase(it); // note it will = next(it) after erase
        } else{
            ++it;
        }
    }

    // output all the remaining elements
    for(auto const& it:vecInt)std::cout<<it;
    std::cout<<std::endl;

    // recreate vecInt, and use method 2
    vecInt = {0, 1, 2, 3, 4, 5};
    //method 2
    for(auto it=std::begin(vecInt);it!=std::end(vecInt);){
        if (*it % 2){
            it = vecInt.erase(it);
        }else{
            ++it;
        }
    }

    // output all the remaining elements
    for(auto const& it:vecInt)std::cout<<it;
    std::cout<<std::endl;

    // recreate vecInt, and use method 3
    vecInt = {0, 1, 2, 3, 4, 5};
    //method 3
    vecInt.erase(std::remove_if(vecInt.begin(), vecInt.end(),
                 [](const int a){return a % 2;}),
                 vecInt.end());

    // output all the remaining elements
    for(auto const& it:vecInt)std::cout<<it;
    std::cout<<std::endl;

}

output as below:

024
024
024

A more generic method:

template<class Container, class F>
void erase_where(Container& c, F&& f)
{
    c.erase(std::remove_if(c.begin(), c.end(),std::forward<F>(f)),
            c.end());
}

void test_del_vector(){
    std::vector<int> vecInt{0, 1, 2, 3, 4, 5};
    //method 4
    auto is_odd = [](int x){return x % 2;};
    erase_where(vecInt, is_odd);

    // output all the remaining elements
    for(auto const& it:vecInt)std::cout<<it;
    std::cout<<std::endl;    
}

When a 'blur' event occurs, how can I find out which element focus went *to*?

It's possible to use the document's mousedown event instead of blur:

$(document).mousedown(function(event){
  if ($(event.target).attr("id") == "mySpan") {
    // some process
  }
});

How do I properly set the permgen size?

So you are doing the right thing concerning "-XX:MaxPermSize=512m": it is indeed the correct syntax. You could try to set these options directly in the Catalina server files so they are used on server start.

Maybe this post will help you!

How to make sure that Tomcat6 reads CATALINA_OPTS on Windows?

How to delete multiple values from a vector?

First we can define a new operator,

"%ni%" = Negate( "%in%" )

Then, it's like x not in remove:

x <- 1:10
remove <- c(2,3,5)
x <- x[ x %ni% remove ]

or, skipping the remove variable, go directly:

x <- x[ x %ni% c(2,3,5)]

Android Animation Alpha

Setting alpha to 1 before starting the animation worked for me:

AlphaAnimation animation1 = new AlphaAnimation(0.2f, 1.0f);
animation1.setDuration(500);
iv.setAlpha(1f);
iv.startAnimation(animation1);

At least on my tests, there's no flickering because of setting alpha before starting the animation. It just works fine.

Looping Over Result Sets in MySQL

Use cursors.

A cursor can be thought of like a buffered reader, when reading through a document. If you think of each row as a line in a document, then you would read the next line, perform your operations, and then advance the cursor.
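
A minimal sketch of such a cursor loop inside a stored procedure (the table and column names are purely illustrative):

DELIMITER //
CREATE PROCEDURE process_players()
BEGIN
  DECLARE done INT DEFAULT FALSE;
  DECLARE p_name VARCHAR(64);
  DECLARE cur CURSOR FOR SELECT player_name FROM player_playtime;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = TRUE;

  OPEN cur;
  read_loop: LOOP
    FETCH cur INTO p_name;
    IF done THEN
      LEAVE read_loop;
    END IF;
    -- perform your per-row operations here
  END LOOP;
  CLOSE cur;
END //
DELIMITER ;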

git - remote add origin vs remote set-url origin

If you have an existing project and you would like to add a remote repository URL, then you need to run the following command:

git init

If you would like to add a README.md file, you can create it and add it using the command below.

git add README.md

Make your first commit using the command below:

git commit -m "first commit"

Now you have completed the local repository setup. So how do you add the remote repository URL? Check the command below; this one is for an SSH URL, and you can change it for HTTPS.

git remote add origin git@github.com:user-name/repository-name.git

To push your first commit, use the command below:

git push -u origin master

Google Maps shows "For development purposes only"

As recommended in a comment, I used the "Google Maps Platform API Checker" Chrome add-in to identify and resolve the issue.

Essentially, this add-in directed me to here where I was able to sign in to Google and create a free API key.

Afterwards, I updated my JavaScript and it immediately resolved this issue.

Old JavaScript: ...script src="https://maps.googleapis.com/maps/api/js?v=3" ...

Updated Javascript:...script src="https://maps.googleapis.com/maps/api/js?key=*****GOOGLE API KEY******&v=3" ...

The add-in then validated the JS API call. Hope this helps someone resolve the issue quickly!


Django CSRF Cookie Not Set

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt

@csrf_exempt 
def your_view(request):
    if request.method == "POST":
        pass  # do something with the POST data here
    return HttpResponse("Your response")

How to use wait and notify in Java without IllegalMonitorStateException?

While using the wait and notify or notifyAll methods in Java, the following things must be remembered (a short sketch follows the list):

  1. Use notifyAll instead of notify if you expect that more than one thread will be waiting for a lock.
  2. The wait and notify methods must be called in a synchronized context. See the link for a more detailed explanation.
  3. Always call the wait() method in a loop because if multiple threads are waiting for a lock and one of them got the lock and reset the condition, then the other threads need to check the condition after they wake up to see whether they need to wait again or can start processing.
  4. Use the same object for calling wait() and notify() method; every object has its own lock so calling wait() on object A and notify() on object B will not make any sense.
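
A minimal sketch that follows these rules, using a made-up shared flag:

class SharedFlag {
    private final Object lock = new Object();
    private boolean ready = false;

    public void waitUntilReady() throws InterruptedException {
        synchronized (lock) {              // rule 2: synchronized context
            while (!ready) {               // rule 3: wait in a loop
                lock.wait();               // rule 4: same object for wait and notify
            }
        }
    }

    public void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();              // rule 1: notifyAll if several threads may be waiting
        }
    }
}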

Getting a HeadlessException: No X11 DISPLAY variable was set

I think you are trying to run some utility or shell script from UNIX\LINUX which has some GUI. Anyways

SOLUTION: dude all you need is an XServer & X11 forwarding enabled. I use XMing (XServer). You are already enabling X11 forwarding. Just Install it(XMing) and keep it running when you create the session with PuTTY.

Writing to a new file if it doesn't exist, and appending to a file if it does

It's not clear to me exactly where the high-score that you're interested in is stored, but the code below should be what you need to check if the file exists and append to it if desired. I prefer this method to the "try/except".

import os
player = 'bob'

filename = player+'.txt'

if os.path.exists(filename):
    append_write = 'a' # append if already exists
else:
    append_write = 'w' # make a new file if not

highscore = open(filename,append_write)
highscore.write("Username: " + player + '\n')
highscore.close()

assign value using linq

using Linq would be:

 listOfCompany.Where(c=> c.id == 1).FirstOrDefault().Name = "Whatever Name";

UPDATE

This can be simplified to be...

 listOfCompany.FirstOrDefault(c=> c.id == 1).Name = "Whatever Name";

UPDATE

For multiple items (condition is met by multiple items):

 listOfCompany.Where(c=> c.id == 1).ToList().ForEach(cc => cc.Name = "Whatever Name");

How to create a sticky footer that plays well with Bootstrap 3

The best way is to do the following:
HTML:Sticky Footer
CSS: CSS for Sticky Footer
HTML Code Sample:

<div class="container">
  <div class="page-header">
    <h1>Sticky footer</h1>
  </div>
  <p class="lead">Pin a fixed-height footer to the bottom of the viewport in desktop browsers with this custom HTML and CSS.</p>
  <p>Use <a href="../sticky-footer-navbar">the sticky footer with a fixed navbar</a> if need be, too.</p>
</div>

<footer class="footer">
  <div class="container">
    <p class="text-muted">Place sticky footer content here.</p>
  </div>
</footer>

CSS Code Sample:

    html {
  position: relative;
  min-height: 100%;
}
body {
  /* Margin bottom by footer height */
  margin-bottom: 60px;
}
.footer {
  position: absolute;
  bottom: 0;
  width: 100%;
  /* Set the fixed height of the footer here */
  height: 60px;
  background-color: #f5f5f5;
}

Another little tweak might make it more perfect (depends on your project), so it will not affect footer on mobile views.

@media (max-width:768px){ .footer{position:absolute;width:100%;} }
@media (min-width:768px){ .footer{position:absolute;bottom:0;height:60px;width:100%;}}

How to convert hex to ASCII characters in the Linux shell?

You can use this command (python script) for larger inputs:

echo 58595a | python -c "import sys; import binascii; print(binascii.unhexlify(sys.stdin.read().strip()).decode())"

The result will be:

XYZ

And for more simplicity, define an alias:

alias hexdecoder='python -c "import sys; import binascii; print(binascii.unhexlify(sys.stdin.read().strip()).decode())"'

echo 58595a | hexdecoder

Make Bootstrap 3 Tabs Responsive

I have created a directive in AngularJS supported by the ng-bootstrap components:

https://angular-ui.github.io/bootstrap/#!#tabs

Here I share the code that I implemented: https://jsfiddle.net/k1r02/u6gpv4dc/

LINK : fatal error LNK1104: cannot open file 'D:\...\MyProj.exe'

The file may be locked because the program is currently running. Try killing the process with Task Manager.

Is it possible to opt-out of dark mode on iOS 13?

Latest Update-

If you're using Xcode 10.x, then the default UIUserInterfaceStyle is light for iOS 13.x. When run on an iOS 13 device, it will work in Light Mode only.

No need to explicitly add the UIUserInterfaceStyle key to the Info.plist file; adding it will give an error when you validate your app, saying:

Invalid Info.plist Key. The key 'UIUserInterfaceStyle' in the Payload/AppName.appInfo.plist file is not valid.

Only add the UIUserInterfaceStyle key in Info.plist file when using Xcode 11.x.

Exception is never thrown in body of corresponding try statement

Always remember that in the case of a checked exception you can only catch it if something can actually throw it (either you throw it yourself, or some method used in your code declares that it throws it), but in the case of an unchecked exception you can catch it even when you have not thrown that exception.

Post-increment and Pre-increment concept?

Pre-increment means the variable is incremented before its value is used, e.g.:

(++v) // v is incremented first, and the new value is used

Post-increment means the variable is incremented after its value is used, e.g.:

(rmv++) // the old value of rmv is used, then rmv is incremented

Program:

int rmv = 10, vivek = 10;
cout << "rmv++ = " << rmv++ << endl; // the value is 10
cout << "++vivek = " << ++vivek; // the value is 11

C# cannot convert method to non delegate type

To execute a method you need to add parentheses, even if the method does not take arguments.

So it should be:

string t = obj.getTitle();

Function to check if a string is a date

In case you don't know the date format:

/**
 * Check if the value is a valid date
 *
 * @param mixed $value
 *
 * @return boolean
 */
function isDate($value) 
{
    if (!$value) {
        return false;
    }

    try {
        new \DateTime($value);
        return true;
    } catch (\Exception $e) {
        return false;
    }
}

var_dump(isDate('2017-01-06')); // true
var_dump(isDate('2017-13-06')); // false
var_dump(isDate('2017-02-06T04:20:33')); // true
var_dump(isDate('2017/02/06')); // true
var_dump(isDate('3.6. 2017')); // true
var_dump(isDate(null)); // false
var_dump(isDate(true)); // false
var_dump(isDate(false)); // false
var_dump(isDate('')); // false
var_dump(isDate(45)); // false

On duplicate key ignore?

Mysql has this handy UPDATE INTO command ;)

edit Looks like they renamed it to REPLACE

REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted
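
For illustration, a minimal sketch (the users table and its columns are made up):

REPLACE INTO users (id, name) VALUES (1, 'alice');
-- if a row with id = 1 (or the same unique key) already exists, it is deleted and the new row is inserted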

WPF Binding to parent DataContext

Because of things like this, as a general rule of thumb, I try to avoid as much XAML "trickery" as possible and keep the XAML as dumb and simple as possible and do the rest in the ViewModel (or attached properties or IValueConverters etc. if really necessary).

If possible I would give the ViewModel of the current DataContext a reference (i.e. property) to the relevant parent ViewModel

public class ThisViewModel : ViewModelBase
{
    TypeOfAncestorViewModel Parent { get; set; }
}

and bind against that directly instead.

<TextBox Text="{Binding Parent}" />

ConfigurationManager.AppSettings - How to modify and save?

You can change it manually:

private void UpdateConfigFile(string appConfigPath, string key, string value)
{
     var appConfigContent = File.ReadAllText(appConfigPath);
     var searchedString = $"<add key=\"{key}\" value=\"";
     var index = appConfigContent.IndexOf(searchedString) + searchedString.Length;
     var currentValue = appConfigContent.Substring(index, appConfigContent.IndexOf("\"", index) - index);
     var newContent = appConfigContent.Replace($"{searchedString}{currentValue}\"", $"{searchedString}{value}\"");
     File.WriteAllText(appConfigPath, newContent);
}

Why do I get the "Unhandled exception type IOException"?

add "throws IOException" to your method like this:

public static void main(String args[]) throws  IOException{

        FileReader reader=new FileReader("db.properties");

        Properties p=new Properties();
        p.load(reader);


    }

Get combobox value in Java swing

If nothing is selected, comboBox.getSelectedItem() returns null, so calling .toString() on it will give a NullPointerException. It is better to typecast with (String).
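
A cast-based sketch (comboBox is assumed to be a JComboBox holding strings):

String value = (String) comboBox.getSelectedItem();  // null if nothing is selected, but no NPE
if (value != null) {
    System.out.println(value);
}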

Using setattr() in python

Suppose you want to give attributes to an instance which was previously not written in code. The setattr() does just that. It takes the instance of the class self and key and value to set.

class Example:
    def __init__(self, **kwargs):
        for key, value in kwargs.items():
            setattr(self, key, value)

Tokenizing Error: java.util.regex.PatternSyntaxException, dangling metacharacter '*'

The first answer covers it.

I'm guessing that somewhere down the line you may decide to store your info in a different class/structure. In that case you probably wouldn't want the results going into an array from the split() method.

You didn't ask for it, but I'm bored, so here is an example, hope it's helpful.

This might be the class you write to represent a single person:


class Person {
            public String firstName;
            public String lastName;
            public int id;
            public int age;

      public Person(String firstName, String lastName, int id, int age) {
         this.firstName = firstName;
         this.lastName = lastName;
         this.id = id;
         this.age = age;
      }  
      // Add 'get' and 'set' method if you want to make the attributes private rather than public.
} 

Then, the version of the parsing code you originally posted would look something like this: (This stores them in a LinkedList, you could use something else like a Hashtable, etc..)


try 
{
    String ruta="entrada.al";
    BufferedReader reader = new BufferedReader(new FileReader(ruta));

    LinkedList<Person> list = new LinkedList<Person>();

    String line = null;         
    while ((line=reader.readLine())!=null)
    {
        if (!(line.equals("%")))
        {
            StringTokenizer st = new StringTokenizer(line, "*");
            if (st.countTokens() == 4)
                list.add(new Person(st.nextToken(), st.nextToken(), Integer.parseInt(st.nextToken()), Integer.parseInt(st.nextToken())));
            else {
                // whatever you want to do to account for an invalid entry
                // in your file (not 4 '*' delimiters on a line). Or you
                // could write the 'if' clause differently to account for it.
            }
        }
    }
    reader.close();
}

How can I develop for iPhone using a Windows development machine?

You can use WinChain

Quoting the project page:

It's the easiest way to build the iPhone toolchain on a Windows XP/Vista computer, which in turn, can take Objective-C source code that you write using their UIKit Headers (included with winChain) and compile it into an application that you can use on your iPhone.

Zero-pad digits in string

First of all, your description is misleading. Double is a floating point data type. You presumably want to pad your digits with leading zeros in a string. The following code does that:

$s = sprintf('%02d', $digit);

For more information, refer to the documentation of sprintf.

How to implement private method in ES6 class with Traceur

I hope this can be helpful. :)

I. Vars and functions declared inside an IIFE (Immediately-Invoked Function Expression) can be used only inside that anonymous function. (It can be good to use the "let" and "const" keywords instead of 'var' when you move code to ES6.)

let Name = (function() {
  const _privateHello = function() {
  }
  class Name {
    constructor() {
    }
    publicMethod() {
      _privateHello()
    }
  }
  return Name;
})();

II. WeakMap object can be good for memoryleak trouble.

Stored variables in the WeakMap will be removed when the instance will be removed. Check this article. (Managing the private data of ES6 classes)

let Name = (function() {
  const _privateName = new WeakMap();
})();

III. Let's put all together.

let Name = (function() {
  const _privateName = new WeakMap();
  const _privateHello = function(fullName) {
    console.log("Hello, " + fullName);
  }

  class Name {
    constructor(firstName, lastName) {
      _privateName.set(this, {firstName: firstName, lastName: lastName});
    }
    static printName(name) {
      let privateName = _privateName.get(name);
      let _fullname = privateName.firstName + " " + privateName.lastName;
      _privateHello(_fullname);
    }
    printName() {
      let privateName = _privateName.get(this);
      let _fullname = privateName.firstName + " " + privateName.lastName;
      _privateHello(_fullname);
    }
  }

  return Name;
})();

var aMan = new Name("JH", "Son");
aMan.printName(); // "Hello, JH Son"
Name.printName(aMan); // "Hello, JH Son"

Logical XOR operator in C++?

#if defined(__OBJC__)
    #define __bool BOOL
#else
    #include <stdbool.h>
    #define __bool bool
#endif

static inline __bool xor(__bool a, __bool b)
{
    return (!a && b) || (a && !b);
}

It works as defined. The conditionals are to detect if you are using Objective-C, which is asking for BOOL instead of bool (the length is different!)

Format SQL in SQL Server Management Studio

Late answer, but hopefully worthwhile: The Poor Man's T-SQL Formatter is an open-source (free) T-SQL formatter with complete T-SQL batch/script support (any DDL, any DML), SSMS Plugin, command-line bulk formatter, and other options.

It's available for immediate/online use at http://poorsql.com, and just today graduated to "version 1.0" (it was in beta version for a few months), having just acquired support for MERGE statements, OUTPUT clauses, and other finicky stuff.

The SSMS Add-in allows you to set your own hotkey (default is Ctrl-K, Ctrl-F, to match Visual Studio), and formats the entire script or just the code you have selected/highlighted, if any. Output formatting is customizable.

In SSMS 2008 it combines nicely with the built-in intelli-sense, effectively providing more-or-less the same base functionality as Red Gate's SQL Prompt (SQL Prompt does, of course, have extra stuff, like snippets, quick object scripting, etc).

Feedback/feature requests are more than welcome, please give it a whirl if you get the chance!

Disclosure: This is probably obvious already but I wrote this library/tool/site, so this answer is also shameless self-promotion :)

Activating Anaconda Environment in VsCode

I found out that if we do not specify which Python version we want, the environment that is created is completely empty. Thus, to resolve this issue, I gave the Python version as well, i.e.

conda create --name env_name python=3.6

so what it does now is that it installs python 3.6 and now we can select the interpreter. For that follow the below-mentioned steps:

  1. Firstly, open the command palette using Ctrl + Shift + P

  2. Secondly, Select Python: select Interpreter

  3. Now, Select Enter interpreter path

  4. We have to add the path where the env is, the default location will be C:\Users\YourUserName\Anaconda3\envs\env_name

Finally, you have successfully activated your environment. It might not be the best way, but it worked for me. Let me know if there is any issue.

How to list all databases in the mongo shell?

I have found one solution, where admin()/others didn't worked.

const { promisify } = require('util');
const exec = promisify(require('child_process').exec)
async function test() {
  var res = await exec('mongo --eval "db.adminCommand( { listDatabases: 1 } )" --quiet')
  return { res }
}

test()
  .then(resp => {
    console.log('All dbs', JSON.parse(resp.res.stdout).databases)
  })
test()

jquery AJAX and json format

Currently you are sending the data as typical POST values, which look like this:

first_name=somename&last_name=somesurname

If you want to send data as json you need to create an object with data and stringify it.

data: JSON.stringify(someobject)

MySQL Nested Select Query?

You just need to write the first query as a subquery (derived table), inside parentheses, pick an alias for it (t below) and alias the columns as well.

The DISTINCT can also be safely removed as the internal GROUP BY makes it redundant:

SELECT DATE(`date`) AS `date` , COUNT(`player_name`) AS `player_count`
FROM (
    SELECT MIN(`date`) AS `date`, `player_name`
    FROM `player_playtime`
    GROUP BY `player_name`
) AS t
GROUP BY DATE( `date`) DESC LIMIT 60 ;

Since the COUNT is now obvious that is only counting rows of the derived table, you can replace it with COUNT(*) and further simplify the query:

SELECT t.date , COUNT(*) AS player_count
FROM (
    SELECT DATE(MIN(`date`)) AS date
    FROM player_playtime
    GROUP BY player_name
) AS t
GROUP BY t.date DESC LIMIT 60 ;

How to make CREATE OR REPLACE VIEW work in SQL Server?

SQL Server 2016 Answer

With SQL Server 2016 you can now do (MSDN Source):

DROP VIEW IF EXISTS dbo.MyView

Including one C source file in another?

is it ok? yes, it will compile

is it recommended? no - .c files compile to .obj files, which are linked together after compilation (by the linker) into the executable (or library), so there is no need to include one .c file in another. What you probably want to do instead is to make a .h file that lists the functions/variables available in the other .c file, and include the .h file
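
A tiny sketch of that layout (the file and function names are made up):

/* foo.h - declarations shared with other translation units */
#ifndef FOO_H
#define FOO_H
int add(int a, int b);
#endif

/* foo.c - the implementation */
#include "foo.h"
int add(int a, int b) { return a + b; }

/* main.c - includes the header and links against foo.o */
#include <stdio.h>
#include "foo.h"
int main(void) { printf("%d\n", add(2, 3)); return 0; }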

Not able to install Python packages [SSL: TLSV1_ALERT_PROTOCOL_VERSION]

To upgrade the local version I used a slight variant:

curl https://bootstrap.pypa.io/get-pip.py | python - --user

This problem arises if you keep your pip and packages under your home directory as described in this gist.

FTP/SFTP access to an Amazon S3 Bucket

As other posters have pointed out, there are some limitations with the AWS Transfer for SFTP service. You need to align it closely with your requirements. For example, there are no quotas, whitelists/blacklists, or file type limits, and non-key-based access requires external services. There is also a certain overhead relating to user management and IAM, which can become a pain at scale.

We have been running an SFTP S3 proxy gateway for about 5 years now for our customers. The core solution is wrapped in a collection of Docker services and deployed in whatever context is needed, even on-premise or local development servers. The use case for us is a little different, as our solution is focused on data processing and pipelines rather than a file share. In a Salesforce example, a customer will use SFTP as the transport method, sending email, purchase...data to an SFTP/S3 endpoint. This is mapped to an object key on S3. Upon arrival, the data is picked up, processed, routed and loaded to a warehouse. We also have fairly significant auditing requirements for each transfer, something the CloudWatch logs for AWS do not directly provide.

As others have mentioned, rolling your own is an option too. Using AWS Lightsail you can set up a cluster of, say, 4 $10 2GB instances using either Route 53 or an ELB.

In general, it is great to see AWS offer this service and I expect it to mature over time. However, depending on your use case, alternative solutions may be a better fit.

Copy files from one directory into an existing directory

If you want to copy something from one directory into the current directory, do this:

cp dir1/* .

This assumes you're not trying to copy hidden files.

How to code a very simple login system with java

You will need to use java.util.Scanner for this issue.

Here is a good login program for the console:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner; // Scanner handles both file and console input.

public class Login {
    public void run() throws FileNotFoundException {
        Scanner scan = new Scanner(new File("the\\dir\\myFile.extension"));
        Scanner keyboard = new Scanner(System.in);

        String user = scan.nextLine();
        String pass = scan.nextLine(); // reads the stored credentials from the file

        String inpUser = keyboard.nextLine();
        String inpPass = keyboard.nextLine(); // reads the credentials typed by the user

        if (inpUser.equals(user) && inpPass.equals(pass)) {
            System.out.print("your login message");
        } else {
            System.out.print("your error message");
        }
    }
}

Of course, you will use the Scanner constructed from the file (new Scanner(fileToScan)) for the stored credentials, and the one constructed from System.in for user input.

Happy coding!

As a last note, you are at least a decent programmer if you can make Swing components.

Comments in .gitignore?

Yes, you may put comments in there. However, they must start at the beginning of a line.

cf. http://git-scm.com/book/en/Git-Basics-Recording-Changes-to-the-Repository#Ignoring-Files

The rules for the patterns you can put in the .gitignore file are as follows:
- Blank lines or lines starting with # are ignored.
[…]

The comment character is #, example:

# no .a files
*.a

How to download/checkout a project from Google Code in Windows?

If you don't want to install TortoiseSVN, you can simply install 'Subversion for Windows' from here:

http://sourceforge.net/projects/win32svn/

After installing, just open up a command prompt, go to the folder you want to download into, then paste in the checkout command as indicated on the project's 'source' page. E.g.

svn checkout http://projectname.googlecode.com/svn/trunk/ projectname-read-only

Note that the space between the URL and the last string is intentional; the last string is the folder name into which the source will be downloaded.

How to watch for form changes in Angular

UPD. The answer and demo are updated to align with latest Angular.


You can subscribe to changes of the entire form because the FormGroup representing the form provides a valueChanges property, which is an Observable instance:

this.form.valueChanges.subscribe(data => console.log('Form changes', data));

In this case you need to construct the form manually using FormBuilder. Something like this:

export class App {
  constructor(private formBuilder: FormBuilder) {
    this.form = formBuilder.group({
      firstName: 'Thomas',
      lastName: 'Mann'
    })

    this.form.valueChanges.subscribe(data => {
      console.log('Form changes', data)
      this.output = data
    })
  }
}

Check out valueChanges in action in this demo: http://plnkr.co/edit/xOz5xaQyMlRzSrgtt7Wn?p=preview

Make copy of an array

you can use

int[] a = new int[]{1,2,3,4,5};
int[] b = a.clone();

as well.

Error in launching AVD with AMD processor

For those who are using Android Studio based on Jetbrains:

  1. Go to Tools > Android > SDK Manager

  2. Under Extras, select the checkbox Intel x86 Emulator Accelerator

Those who are unable to use a Nexus AVD can also try using a generic AVD.

  1. Go to Tools > Android > AVD Manager

Then create a new generic AVD with something like QVGA and use it for your app. This AVD does not use hardware acceleration.

Set proxy through windows command line including login parameters

If you are using a Microsoft Windows environment, you can set a variable named HTTP_PROXY, FTP_PROXY, or HTTPS_PROXY depending on the requirement.

I have used the following setting to allow my commands at the Windows command prompt to use the browser proxy to access the internet.

set HTTP_PROXY=http://proxy_userid:proxy_password@proxy_ip:proxy_port

The parameters on the right must be replaced with actual values.

Once the variable HTTP_PROXY is set, all our subsequent commands executed at the Windows command prompt will be able to access the internet through the proxy, along with the authentication provided.

Additionally, if you want ftp and https to use the same proxy as well, you may want to set the following environment variables:

set FTP_PROXY=%HTTP_PROXY%

set HTTPS_PROXY=%HTTP_PROXY%

master branch and 'origin/master' have diverged, how to 'undiverge' branches'?

Met this problem when I created a branch based on branch A by

git checkout -b a

and then I set the upstream of branch a to the origin branch B by

git branch -u origin/B

Then I got the error message above.

One way I solved this problem was:

  • Delete the branch a
  • Create a new branch b by
git checkout -b b origin/B

Excel VBA Loop on columns

Another method to try out. Using Select can also be avoided by setting the initial column into a Range object; performance-wise it helps.

Dim rng As Range
Dim i As Long

Set rng = Worksheets(1).Range("A1") '-- you may change the sheet name/index according to yours.

'-- here is your loop
i = 1
Do
   '-- do something: e.g. show the address of the column that you are currently in
   MsgBox rng.Offset(0, i).Address
   i = i + 1
Loop Until i > 10

Two methods to get the column name from a column number:

  • Split()

code

colName = Split(rng.Offset(0, i).Address, "$")(1)
  • String manipulation:

code

Function myColName(colNum As Long) As String
    Dim addr As String
    addr = Cells(1, colNum).Address(False, False) '-- e.g. "AB1"
    myColName = Left(addr, Len(addr) - 1)         '-- strip the trailing row number
End Function

Get the length of a String

As of Swift 4+

It's just:

test1.count

for reasons.

(Thanks to Martin R)

As of Swift 2:

With Swift 2, Apple has changed global functions to protocol extensions, extensions that match any type conforming to a protocol. Thus the new syntax is:

test1.characters.count

(Thanks to JohnDifool for the heads up)

As of Swift 1

Use the count characters method:

let unusualMenagerie = "Koala 🐨, Snail 🐌, Penguin 🐧, Dromedary 🐪"
println("unusualMenagerie has \(count(unusualMenagerie)) characters")
// prints "unusualMenagerie has 40 characters"

right from the Apple Swift Guide

(note, for versions of Swift earlier than 1.2, this would be countElements(unusualMenagerie) instead)

for your variable, it would be

length = count(test1) // was countElements in earlier versions of Swift

Or you can use test1.utf16Count

How can I manually generate a .pyc file from a .py file

There are two ways to do this:

  1. Command line
  2. Using python program

If you are using the command line, use python -m compileall <argument> to compile Python code to Python byte code. Ex: python -m compileall -x ./*

Or, you can use this code to compile your library into byte code:

import compileall
import os

lib_path = "your_lib_path"
build_path = "your-dest_path"

compileall.compile_dir(lib_path, force=True, legacy=True)

def moveToNewLocation(cu_path):
    for file in os.listdir(cu_path):
        if os.path.isdir(os.path.join(cu_path, file)):
            moveToNewLocation(os.path.join(cu_path, file))  # recurse into subdirectories
        elif file.endswith(".pyc"):
            dest = os.path.join(build_path, cu_path, file)
            os.makedirs(os.path.dirname(dest), exist_ok=True)
            os.rename(os.path.join(cu_path, file), dest)

moveToNewLocation(lib_path)

Look at docs.python.org for detailed documentation.

Creating all possible k combinations of n items in C++

Here is an algorithm I came up with for solving this problem. You should be able to modify it to work with your code.

#include <cmath>     // log2
#include <iostream>  // std::cout, std::endl
using std::cout; using std::endl; using std::log2;

void r_nCr(const unsigned int &startNum, const unsigned int &bitVal, const unsigned int &testNum) // Should be called with arguments (2^r)-1, 2^(r-1), 2^(n-1)
{
    unsigned int n = (startNum - bitVal) << 1;
    n += bitVal ? 1 : 0;

    for (unsigned int i = log2(testNum) + 1; i > 0; i--) // Prints combination as a series of 1s and 0s
        cout << (n >> (i - 1) & 1);
    cout << endl;

    if (!(n & testNum) && n != startNum)
        r_nCr(n, bitVal, testNum);

    if (bitVal && bitVal < testNum)
        r_nCr(startNum, bitVal >> 1, testNum);
}

You can see an explanation of how it works here.

Defining a variable with or without export

Although not explicitly mentioned in the discussion, it is NOT necessary to use export when spawning a subshell from inside bash, since all variables (exported or not) are copied into the subshell.

Where does PostgreSQL store the database?

I'd bet you're asking this question because you've tried pg_ctl start and received the following error:

pg_ctl: no database directory specified and environment variable PGDATA unset

In other words, you're looking for the directory to put after -D in your pg_ctl start command.

In this case, the directory you're looking for contains these files.

PG_VERSION      pg_dynshmem     pg_multixact
pg_snapshots    pg_tblspc       postgresql.conf
base            pg_hba.conf     pg_notify   
pg_stat         pg_twophase     postmaster.opts
global          pg_ident.conf   pg_replslot
pg_stat_tmp     pg_xlog         postmaster.pid
pg_clog         pg_logical      pg_serial
pg_subtrans     postgresql.auto.conf    server.log

You can locate it by locating any of the files and directories above using the search provided with your OS.

For example in my case (a HomeBrew install on Mac OS X), these files are located in /usr/local/var/postgres. To start the server I type:

pg_ctl -D /usr/local/var/postgres -w start

... and it works.

Redraw datatables after using ajax to refresh the table content?

Try destroying the datatable with bDestroy:true like this:

$("#ajaxchange").click(function(){
    var campaign_id = $("#campaigns_id").val();
    var fromDate = $("#from").val();
    var toDate = $("#to").val();

    var url = 'http://domain.com/account/campaign/ajaxrefreshgrid?format=html';
    $.post(url, { campaignId: campaign_id, fromdate: fromDate, todate: toDate},
            function( data ) { 

                $("#ajaxresponse").html(data);

                oTable6 = $('#rankings').dataTable({
                    "bDestroy": true,
                    "sDom": 't<"bottom"filp><"clear">',
                    "bAutoWidth": false,
                    "sPaginationType": "full_numbers",
                    "aoColumns": [
                        { "bSortable": false, "sWidth": "10px" },
                        null,
                        null,
                        null,
                        null,
                        null,
                        null,
                        null,
                        null,
                        null,
                        null
                    ]
                });
            });

});

bDestroy: true will first destroy any datatable instance associated with that selector before reinitializing a new one.

Add the loading screen in starting of the android application

Write the code:

  @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.splash);

        Thread welcomeThread = new Thread() {

            @Override
            public void run() {
                try {
                    super.run();
                    sleep(10000); // Delay of 10 seconds
                } catch (Exception e) {

                } finally {

                    Intent i = new Intent(SplashActivity.this,
                            MainActivity.class);
                    startActivity(i);
                    finish();
                }
            }
        };
        welcomeThread.start();
    }

How to debug a Flask app

You can use app.run(debug=True) to enable the Werkzeug Debugger.

Configure hibernate (using JPA) to store Y/N for type Boolean instead of 0/1

To do even better boolean mapping to Y/N, add this to your Hibernate configuration:

<!-- when using type="yes_no" for booleans, the line below allow booleans in HQL expressions: -->
<property name="hibernate.query.substitutions">true 'Y', false 'N'</property>

Now you can use booleans in HQL, for example:

"FROM " + SomeDomainClass.class.getName() + " somedomainclass " +
"WHERE somedomainclass.someboolean = false"

Upload Progress Bar in PHP

Another uploader, fully in JS: http://developers.sirika.com/mfu/

  • It's free (BSD licence)
  • Internationalizable
  • Cross-browser compliant
  • You have the choice to install APC or not (indeterminate progress bar vs. determinate progress bar)
  • Customizable look, as it uses the dojo template mechanism. You can add your classes/IDs in the templates according to your CSS

have fun

How do I convert a Python 3 byte-string variable into a regular string?

How to filter (skip) non-UTF8 characters from an array?

To address this comment in @uname01's post and the OP, ignore the errors:

Code

>>> b'\x80abc'.decode("utf-8", errors="ignore")
'abc'

Details

From the docs, here are more examples using the same errors parameter:

>>> b'\x80abc'.decode("utf-8", "replace")
'\ufffdabc'
>>> b'\x80abc'.decode("utf-8", "backslashreplace")
'\\x80abc'
>>> b'\x80abc'.decode("utf-8", "strict")  
Traceback (most recent call last):
    ...
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0:
  invalid start byte

The errors argument specifies the response when the input string can’t be converted according to the encoding’s rules. Legal values for this argument are 'strict' (raise a UnicodeDecodeError exception), 'replace' (use U+FFFD, REPLACEMENT CHARACTER), or 'ignore' (just leave the character out of the Unicode result).

Failed to allocate memory: 8

I had the same issue, but before I got the error it asked me to capture a video source.

I disabled the camera support and I was able to use 1024MB of RAM.

Using Windows 64bit, Xoom (Android 3.0).

What REST PUT/POST/DELETE calls should return by a convention?

I like Alfonso Tienda's response from HTTP status code for update and delete?

Here are some Tips:

DELETE

  • 200 (if you want to send some additional data in the response) or 204 (recommended).

  • 202: the delete operation has not been committed yet.

  • If there's nothing to delete, use 204 or 404 (the DELETE operation is idempotent; deleting an already deleted item is a successful operation, so you can return 204, but it's true that idempotent doesn't necessarily imply the same response)

Other errors:

  • 400 Bad Request (Malformed syntax or a bad query is strange but possible).
  • 401 Unauthorized Authentication failure
  • 403 Forbidden: Authorization failure or invalid Application ID.
  • 405 Method Not Allowed. Sure.
  • 409 Resource Conflict can be possible in complex systems.
  • And 501, 502 in case of errors.

PUT

If you're updating an element of a collection

  • 200/204 with the same reasons as DELETE above.
  • 202 if the operation has not been committed yet.

The referenced element doesn't exist:

  • PUT can be 201 (if you created the element because that is your behaviour)

  • 404 If you don't want to create elements via PUT.

  • 400 Bad Request (Malformed syntax or a bad query more common than in case of DELETE).

  • 401 Unauthorized

  • 403 Forbidden: Authorization failure or invalid Application ID.

  • 405 Method Not Allowed. Sure.

  • 409 Resource Conflict can be possible in complex systems, as in DELETE.

  • 422 Unprocessable Entity: helps to distinguish between a "Bad Request" (e.g. malformed XML/JSON) and invalid field values

  • And 501, 502 in case of errors.

Proper way to concatenate variable strings

Good question. But I think there is no good answer which fits your criteria. The best I can think of is to use an extra vars file.

A task like this:

- include_vars: concat.yml

And in concat.yml you have your definition:

newvar: "{{ var1 }}-{{ var2 }}-{{ var3 }}"

How to request Google to re-crawl my website?

There are two options. The first (and better) one is using the Fetch as Google option in Webmaster Tools that Mike Flynn commented about. Here are detailed instructions:

  1. Go to: https://www.google.com/webmasters/tools/ and log in
  2. If you haven't already, add and verify the site with the "Add a Site" button
  3. Click on the site name for the one you want to manage
  4. Click Crawl -> Fetch as Google
  5. Optional: if you want to do a specific page only, type in the URL
  6. Click Fetch
  7. Click Submit to Index
  8. Select either "URL" or "URL and its direct links"
  9. Click OK and you're done.

With the option above, as long as every page can be reached from some link on the initial page or a page that it links to, Google should recrawl the whole thing. If you want to explicitly tell it a list of pages to crawl on the domain, you can follow the directions to submit a sitemap.

Your second (and generally slower) option is, as seanbreeden pointed out, submitting here: http://www.google.com/addurl/

Update 2019:

  1. Log in to Google Search Console
  2. Add a site and verify it with the available methods.
  3. After verification from the console, click on URL Inspection.
  4. In the Search bar on top, enter your website URL or custom URLs for inspection and enter.
  5. After Inspection, it'll show an option to Request Indexing
  6. Click on it and GoogleBot will add your website in a Queue for crawling.

What does it mean when a PostgreSQL process is "idle in transaction"?

The PostgreSQL manual indicates that this means the transaction is open (inside BEGIN) and idle. It's most likely a user connected using the monitor who is thinking or typing. I have plenty of those on my system, too.

If you're using Slony for replication, however, the Slony-I FAQ suggests idle in transaction may mean that the network connection was terminated abruptly. Check out the discussion in that FAQ for more details.

How do I copy an object in Java?

Yes, you are just making a reference to the object. You can clone the object if it implements Cloneable.

Check out this wiki article about copying objects: Object copying
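
For example, here is a minimal sketch of the Cloneable route (the Point class is hypothetical, just for illustration):

public class Point implements Cloneable {
    private int x;
    private int y;

    public Point(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public Point clone() {
        try {
            return (Point) super.clone();      // shallow copy of all fields
        } catch (CloneNotSupportedException e) {
            throw new AssertionError(e);       // cannot happen: this class implements Cloneable
        }
    }
}

With that in place, Point copy = original.clone(); gives you a new object rather than a second reference to the same one.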

How to convert char to int?

int val = '1' - 48;
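
If it helps, here is a small sketch of the same idea in Java (the arithmetic is identical in C#); subtracting the character '0' instead of the magic number 48 usually reads more clearly:

public class CharToInt {
    public static void main(String[] args) {
        char c = '1';
        int val = c - '0';   // '0' has code 48, so '1' - '0' yields 1
        System.out.println(val);

        // Character.getNumericValue also works in Java and is more explicit
        System.out.println(Character.getNumericValue(c));
    }
}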

Delete duplicate elements from an array

It's easier using Array.filter:

var unique = arr.filter(function(elem, index, self) {
    return index === self.indexOf(elem);
})

Inserting created_at data with Laravel

In your User model, add the following line in the User class:

public $timestamps = true;

Now, whenever you save or update a user, Laravel will automatically update the created_at and updated_at fields.


Update:
If you want to set created_at manually, you should use the date format Y-m-d H:i:s. The problem is that the format you have used is not the same as Laravel uses for the created_at field.

Update (Nov 2018, Laravel 5.6): if you get "message": "Access level to App\\Note::$timestamps must be public", make sure the property has the proper access level as well. In Laravel 5.6 it must be public.

Maven compile with multiple src directories

This also works with Maven by defining the resources tag. You can name your src folders whatever you like.

    <resources>
        <resource>
            <directory>src/main/java</directory>
            <includes>
                <include>**/*.java</include>
                <include>**/*.properties</include>
                <include>**/*.xml</include>
            </includes>
        </resource>

        <resource>
            <directory>src/main/resources</directory>
            <includes>
                <include>**/*.java</include>
                <include>**/*.properties</include>
                <include>**/*.xml</include>
            </includes>
        </resource>

        <resource>
            <directory>src/main/generated</directory>
            <includes>
                <include>**/*.java</include>
                <include>**/*.properties</include>
                <include>**/*.xml</include>
            </includes>
        </resource>
    </resources>

Tools for creating Class Diagrams

Some time ago I used DIA - free and platform-independent. It was ok. Now I use Enterprise Architect but it's not free.

List directory tree structure in python?

On top of dhobbs' answer above (https://stackoverflow.com/a/9728478/624597), here is extra functionality for storing the results to a file (I personally use it to copy and paste into FreeMind to get a nice overview of the structure, therefore I used tabs instead of spaces for indentation):

import os

def list_files(startpath):

    with open("folder_structure.txt", "w") as f_output:
        for root, dirs, files in os.walk(startpath):
            level = root.replace(startpath, '').count(os.sep)
            indent = '\t' * 1 * (level)
            output_string = '{}{}/'.format(indent, os.path.basename(root))
            print(output_string)
            f_output.write(output_string + '\n')
            subindent = '\t' * 1 * (level + 1)
            for f in files:
                output_string = '{}{}'.format(subindent, f)
                print(output_string)
                f_output.write(output_string + '\n')

list_files(".")

nvm keeps "forgetting" node in new terminal session

nvm does its job by changing the PATH variable, so you need to make sure you aren't somehow changing your PATH to something else after sourcing the nvm.sh script.

In my case, nvm.sh was being called in .bashrc but then the PATH variable was getting updated in .bash_profile which caused my session to find the system node before the nvm node.

jQuery .css("margin-top", value) not updating in IE 8 (Standards mode)

Try marginTop in place of margin-top, eg:

$("#ActionBox").css("marginTop", foo);

UIImage resize (Scale proportion)

Try to make the bounds size an integer.

#include <math.h>
....

    if (ratio > 1) {
        bounds.size.width = resolution;
        bounds.size.height = round(bounds.size.width / ratio);
    } else {
        bounds.size.height = resolution;
        bounds.size.width = round(bounds.size.height * ratio);
    }

Difference between two DateTimes C#?

Try the following

double hours = (b-a).TotalHours;

If you just want the hour difference excluding the difference in days you can use the following

int hours = (b-a).Hours;

The difference between these two properties is mainly seen when the time difference is more than 1 day. The Hours property will only report the actual hour difference between the two dates. So if two dates differed by 100 years but occurred at the same time of day, Hours would return 0. But TotalHours will return the total number of hours that elapsed between the two dates (about 876,000 hours in this case).

The other difference is that TotalHours will return fractional hours. This may or may not be what you want. If not, Math.Round can adjust it to your liking.

How to retry image pull in a kubernetes Pods?

If the Pod is part of a Deployment or Service, deleting it will restart the Pod and, potentially, place it onto another node:

$ kubectl delete po $POD_NAME

replace it if it's an individual Pod:

$ kubectl get po -n $namespace $POD_NAME -o yaml | kubectl replace -f -

How to search for a part of a word with ElasticSearch

If you want to implement autocomplete functionality, then the Completion Suggester is the neatest solution. This blog post contains a very clear description of how it works.

In two words, it's an in-memory data structure called an FST which contains valid suggestions and is optimised for fast retrieval and memory usage. Essentially, it is just a graph. For instance, an FST containing the words hotel, marriot, mercure, munchen and munich would look like this:

(image: the FST graph for these example words)

How to print instances of a class using print()?

A prettier version of the response by @user394430:

class Element:
    def __init__(self, name, symbol, number):
        self.name = name
        self.symbol = symbol
        self.number = number

    def __str__(self):
        return  str(self.__class__) + '\n'+ '\n'.join(('{} = {}'.format(item, self.__dict__[item]) for item in self.__dict__))

elem = Element('my_name', 'some_symbol', 3)
print(elem)

Produces visually nice list of the names and values.

<class '__main__.Element'>
name = my_name
symbol = some_symbol
number = 3

An even fancier version (thanks Ruud) sorts the items:

def __str__(self):
    return  str(self.__class__) + '\n' + '\n'.join((str(item) + ' = ' + str(self.__dict__[item]) for item in sorted(self.__dict__)))

Laravel Eloquent Sum of relation's column

Also, using the query builder:

DB::table("rates")->get()->sum("rate_value")

This gets the sum of all rate values in the rates table.

To get the sum of a user's products:

DB::table("users")->get()->sum("products")

Speed tradeoff of Java's -Xms and -Xmx options

  1. Allocation always depends on your OS. If you allocate too much memory, you could end up having loaded portions into swap, which indeed is slow.
  2. Whether your program runs slower or faster depends on the references the VM has to handle and to clean. The GC doesn't have to sweep through the allocated memory to find abandoned objects. It knows its objects and the amount of memory they occupy by reference mapping. So sweeping just depends on the size of your objects. If your program behaves the same in both cases, the only performance impact should be at VM startup, when the VM tries to allocate the memory provided by your OS and if you use swap (which again leads to 1.)

Lost connection to MySQL server at 'reading initial communication packet', system error: 0

Open the MySQL configuration file named my.cnf and find "bind-address". Replace the setting (127.0.0.1 or localhost) with your live server IP (the IP you are using in the mysql_connect function).

This will solve the problem definitely.

Thanks

Can I set background image and opacity in the same property?

I had a similar issue. I had an image and wanted to reduce the transparency and have a black background behind the image. Instead of reducing the opacity or creating a black background or any secondary div I set a linear-gradient to the image all on one line:

background: linear-gradient(to bottom, rgba(0, 0, 0, 0.8) 0%, rgba(0, 0, 0, 0.8) 100%), url("/img/picture.png");

Run Java Code Online

OpenCode appears to be a project at the MIT Media Lab for running Java Code online in a web browser interface. Years ago, I played around a lot at TopCoder. It runs a Java Web Start app, though, so you would need a Java run time installed.

How to handle an IF STATEMENT in a Mustache template?

In general, you use the # syntax:

{{#a_boolean}}
  I only show up if the boolean was true.
{{/a_boolean}}

The goal is to move as much logic as possible out of the template (which makes sense).

How do I move a file from one location to another in Java?

Java 6

public boolean moveFile(String sourcePath, String targetPath) {

    File fileToMove = new File(sourcePath);

    return fileToMove.renameTo(new File(targetPath));
}

Java 7 (Using NIO)

public boolean moveFile(String sourcePath, String targetPath) {

    boolean fileMoved = true;

    try {

        Files.move(Paths.get(sourcePath), Paths.get(targetPath), StandardCopyOption.REPLACE_EXISTING);

    } catch (Exception e) {

        fileMoved = false;
        e.printStackTrace();
    }

    return fileMoved;
}

Effective swapping of elements of an array in Java

This should make it seamless:

public static final <T> void swap (T[] a, int i, int j) {
  T t = a[i];
  a[i] = a[j];
  a[j] = t;
}

public static final <T> void swap (List<T> l, int i, int j) {
  Collections.<T>swap(l, i, j);
}

private void test() {
  String [] a = {"Hello", "Goodbye"};
  swap(a, 0, 1);
  System.out.println("a:"+Arrays.toString(a));
  List<String> l = new ArrayList<String>(Arrays.asList(a));
  swap(l, 0, 1);
  System.out.println("l:"+l);
}

How to upload image in CodeIgniter?

// This is the code you have to use in your controller

$config['upload_path'] = './uploads/';         // upload directory (http://localhost/codeigniter/index.php/your directory)
$config['allowed_types'] = 'gif|jpg|png|jpeg'; // allowed image types
$config['max_size'] = 0;                       // 0 = no size limit

// Add a timestamp to the image name to avoid duplicate file names
$new_name = time() . '-' . $_FILES["txt_file"]['name'];
$config['file_name'] = $new_name;              // store the new name in $config['file_name']

$this->load->library('upload', $config);

if (!$this->upload->do_upload() && !empty($_FILES['txt_file']['name'])) {
    $error = array('error' => $this->upload->display_errors());
    $this->load->view('production/create_images', $error);
} else {
    $upload_data = $this->upload->data();
}

Formatting doubles for output in C#

The answer to this is simple and can be found on MSDN

Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.

In your example, the value of i is 6.89999999999999946709 which has the number 9 for all positions between the 3rd and the 16th digit (remember to count the integer part in the digits). When converting to string, the framework rounds the number to the 15th digit.

i     = 6.89999999999999 946709
digit =           111111 111122
        1 23456789012345 678901

MySQL Multiple Where Clause

With your current query you may get no result or an empty result; you need to use OR instead of AND, like below.

$query = mysql_query("SELECT image_id FROM list WHERE (style_id = 24 AND style_value = 'red') OR (style_id = 25 AND style_value = 'big') OR (style_id = 27 AND style_value = 'round')");

Try out this query.

Interface defining a constructor signature?

The purpose of an interface is to enforce a certain object signature. It should explicitly not be concerned with how an object works internally. Therefore, a constructor in an interface does not really make sense from a conceptual point of view.

There are some alternatives though:

  • Create an abstract class that acts as a minimal default implementation. That class should have the constructors you expect implementing classes to have.

  • If you don't mind the overkill, use the AbstractFactory pattern and declare a method in the factory class interface that has the required signatures.

  • Pass the GraphicsDeviceManager as a parameter to the Update and Draw methods.

  • Use a Compositional Object Oriented Programming framework to pass the GraphicsDeviceManager into the part of the object that requires it. This is a pretty experimental solution in my opinion.

The situation you describe is not easy to handle in general. A similar case would be entities in a business application that require access to the database.

macro for Hide rows in excel 2010

Well, you're on the right path, Benno!

There are some tips regarding VBA programming that might help you out.

  1. Always use explicit references to the sheet you want to interact with. Otherwise, Excel may 'assume' your code applies to the active sheet and eventually you'll see it screw your spreadsheet up.

  2. As lionz mentioned, get in touch with the native methods Excel offers. You might use them on most of your tricks.

  3. Explicitly declare your variables... they'll show the list of methods each object offers in VBA. It might save you time digging on the internet.

Now, let's have a draft code...

Remember this code must be within the Excel Sheet object, as explained by lionz. It only applies to Sheet 2, is up to you to adapt it to both Sheet 2 and Sheet 3 in the way you prefer.

Hope it helps!

Private Sub Worksheet_Change(ByVal Target As Range)

    Dim oSheet As Excel.Worksheet

    'We only want to do something if the changed cell is B6, right?
    If Target.Address = "$B$6" Then

        'Checks if it's a number...
        If IsNumeric(Target.Value) Then

            'Let's avoid values out of your bonds, correct?
            If Target.Value > 0 And Target.Value < 51 Then

                'Let's assign the worksheet we'll show / hide rows to one variable and then
                '   use only the reference to the variable itself instead of the sheet name.
                '   It's safer.

                'You can alternatively replace 'sheet 2' by 2 (without quotes) which will represent
                '   the sheet index within the workbook
                Set oSheet = ActiveWorkbook.Sheets("Sheet 2")

                'We'll unhide before hide, to ensure we hide the correct ones
                oSheet.Range("A7:A56").EntireRow.Hidden = False

                oSheet.Range("A" & Target.Value + 7 & ":A56").EntireRow.Hidden = True

            End If

        End If

    End If

End Sub

How to get the GL library/headers?

In Visual Studio :

//OpenGL
#pragma comment(lib, "opengl32")
#pragma comment(lib, "glu32")
#include <gl/gl.h>
#include <gl/glu.h>

Headers are in the SDK : C:\Program Files\Microsoft SDKs\Windows\v7.0A\Include\gl

Get the Query Executed in Laravel 3/4

Using the query log doesn't give you the actual raw query being executed, especially if there are bound values. This is the best approach to get the raw SQL:

DB::table('tablename')->toSql();

or more involved:

$query = Article::whereIn('author_id', [1,2,3])->orderBy('published', 'desc')->toSql();
dd($query);

How to call a REST web service API from JavaScript?

Without a doubt, the simplest method uses an invisible FORM element in HTML specifying the desired REST method. Then the arguments can be inserted into input type=hidden value fields using JavaScript and the form can be submitted from the button click event listener or onclick event using one line of JavaScript. Here is an example that assumes the REST API is in file REST.php:

<body>
<h2>REST-test</h2>
<input type=button onclick="document.getElementById('a').submit();"
    value="Do It">
<form id=a action="REST.php" method=post>
<input type=hidden name="arg" value="val">
</form>
</body>

Note that this example will replace the page with the output from page REST.php. I'm not sure how to modify this if you wish the API to be called with no visible effect on the current page. But it's certainly simple.

How to show Error & Warning Message Box in .NET/ How to Customize MessageBox

Try this:

MessageBox.Show("Some text", "Some title", 
    MessageBoxButtons.OK, MessageBoxIcon.Error);

How to run a python script from IDLE interactive shell?

execfile('helloworld.py') does the job for me. A thing to note is to enter the complete directory path of the .py file if it isn't in the Python folder itself (at least this is the case on Windows).

For example, execfile('C:/helloworld.py')

git checkout tag, git pull fails in branch

What worked for me was: git branch --set-upstream-to=origin master When I did a pull again I only got the updates from master and the warning went away.

How to run docker-compose up -d at system start up?

I tried restart: always; it works for some containers (like php-fpm), but I faced the problem that some containers (like nginx) still do not restart after a reboot.

I solved the problem with cron:

crontab -e

@reboot (sleep 30s ; cd directory_has_dockercomposeyml ; /usr/local/bin/docker-compose up -d )&

Hex-encoded String to Byte Array

Java SE 6 or Java EE 5 provides a method to do this now so there is no need for extra libraries.

The method is DatatypeConverter.parseHexBinary

In this case it can be used as follows:

String str = "9B7D2C34A366BF890C730641E6CECF6F";
byte[] bytes = DatatypeConverter.parseHexBinary(str);

The class also provides type conversions for many other formats that are generally used in XML.
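
For instance, the same class can also go the other way; a small sketch of the round trip (assuming Java 8 or earlier, where javax.xml.bind is still on the default classpath):

import javax.xml.bind.DatatypeConverter;

public class HexRoundTrip {
    public static void main(String[] args) {
        String str = "9B7D2C34A366BF890C730641E6CECF6F";
        byte[] bytes = DatatypeConverter.parseHexBinary(str);   // hex string -> bytes
        String back = DatatypeConverter.printHexBinary(bytes);  // bytes -> uppercase hex string
        System.out.println(back.equals(str));                   // true
    }
}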

Adding files to java classpath at runtime

The way I have done this is by using my own class loader

URLClassLoader urlClassLoader = (URLClassLoader) ClassLoader.getSystemClassLoader();
DynamicURLClassLoader dynalLoader = new DynamicURLClassLoader(urlClassLoader);

And create the following class:

public class DynamicURLClassLoader extends URLClassLoader {

    public DynamicURLClassLoader(URLClassLoader classLoader) {
        super(classLoader.getURLs());
    }

    @Override
    public void addURL(URL url) {
        super.addURL(url);
    }
}

Works without any reflection

How to reset (clear) form through JavaScript?

You can clear the whole form using an onclick function. Here is the code for it.

 <button type="reset" value="reset" class="btnreset" onclick="window.location.reload()">Reset</button>

The window.location.reload() function will refresh your page and all data will be cleared.

nginx - read custom header from upstream server

I was facing the same issue. I tried both $http_my_custom_header and $sent_http_my_custom_header, but neither worked for me.

I eventually solved this issue by using $upstream_http_my_custom_header.

C pointer to array/array of pointers disambiguation

The answer for the last two can also be deducted from the golden rule in C:

Declaration follows use.

int (*arr2)[8];

What happens if you dereference arr2? You get an array of 8 integers.

int *(arr3[8]);

What happens if you take an element from arr3? You get a pointer to an integer.

This also helps when dealing with pointers to functions. To take sigjuice's example:

float *(*x)(void)

What happens when you dereference x? You get a function that you can call with no arguments. What happens when you call it? It will return a pointer to a float.

Operator precedence is always tricky, though. However, using parentheses can actually also be confusing because declaration follows use. At least, to me, intuitively arr2 looks like an array of 8 pointers to ints, but it is actually the other way around. Just takes some getting used to. Reason enough to always add a comment to these declarations, if you ask me :)

edit: example

By the way, I just stumbled across the following situation: a function that has a static matrix and that uses pointer arithmetic to see if the row pointer is out of bounds. Example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NUM_ELEM(ar) (sizeof(ar) / sizeof((ar)[0]))

int *
put_off(const int newrow[2])
{
    static int mymatrix[3][2];
    static int (*rowp)[2] = mymatrix;
    int (* const border)[] = mymatrix + NUM_ELEM(mymatrix);

    memcpy(rowp, newrow, sizeof(*rowp));
    rowp += 1;
    if (rowp == border) {
        rowp = mymatrix;
    }

    return *rowp;
}

int
main(int argc, char *argv[])
{
    int i = 0;
    int row[2] = {0, 1};
    int *rout;

    for (i = 0; i < 6; i++) {
        row[0] = i;
        row[1] += i;
        rout = put_off(row);
        printf("%d (%p): [%d, %d]\n", i, (void *) rout, rout[0], rout[1]);
    }

    return 0;
}

Output:

0 (0x804a02c): [0, 0]
1 (0x804a034): [0, 0]
2 (0x804a024): [0, 1]
3 (0x804a02c): [1, 2]
4 (0x804a034): [2, 4]
5 (0x804a024): [3, 7]

Note that the value of border never changes, so the compiler can optimize that away. This is different from what you might initially want to use: const int (*border)[3]: that declares border as a pointer to an array of 3 integers that will not change value as long as the variable exists. However, that pointer may be pointed to any other such array at any time. We want that kind of behaviour for the argument, instead (because this function does not change any of those integers). Declaration follows use.

(p.s.: feel free to improve this sample!)

How to check which PHP extensions have been enabled/disabled in Ubuntu Linux 12.04 LTS?

You can view which modules (compiled in) are available via terminal through php -m

How to sleep the thread in node.js without affecting other threads?

When working with async functions or observables provided by 3rd party libraries, for example Cloud Firestore, I've found the waitFor method shown below (TypeScript, but you get the idea...) to be helpful when you need to wait on some process to complete, but you don't want to have to embed callbacks within callbacks within callbacks nor risk an infinite loop.

This method is sort of similar to a while (!condition) sleep loop, but yields asynchronously and performs a test on the completion condition at regular intervals till true or timeout.

export const sleep = (ms: number) => {
    return new Promise(resolve => setTimeout(resolve, ms))
}
/**
 * Wait until the condition tested in a function returns true, or until 
 * a timeout is exceeded.
 * @param interval The frequency with which the boolean function contained in condition is called.
 * @param timeout  The maximum time to allow for booleanFunction to return true
 * @param booleanFunction:  A completion function to evaluate after each interval. waitFor will return true as soon as the completion function returns true.   
 */
export const waitFor = async function (interval: number, timeout: number,
    booleanFunction: Function): Promise<boolean> {
    let elapsed = 1;
    if (booleanFunction()) return true;
    while (elapsed < timeout) {
        elapsed += interval;
        await sleep(interval);
        if (booleanFunction()) {
            return true;
        }
    }
    return false;
}

Say you have a long-running process on your backend that you want to complete before some other task is undertaken. For example, if you have a function that totals a list of accounts, but you want to refresh the accounts from the backend before you calculate, you can do something like this:

async recalcAccountTotals(): Promise<number> {
    this.accountService.refresh();   // start the async refresh
    let updateResult = true;
    if (this.accounts.dirty) {
        updateResult = await waitFor(100, 2000, () => !this.accounts.dirty);
    }
    if (!updateResult) {
        console.error("Account refresh timed out, recalc aborted");
        return NaN;
    }
    return ... // calculate the account total
}

How to determine if a string is a number with C++?

#include <algorithm>  // std::find_if
#include <cctype>     // std::isdigit
#include <string>
using std::string;

bool is_number(const string& s, bool is_signed)
{
    if (s.empty()) return false;

    if (is_signed && (s.front() == '+' || s.front() == '-'))
    {
        return is_number(s.substr(1, s.length() - 1), false);
    }

    auto non_digit = std::find_if(s.begin(), s.end(), [](const char& c) { return !std::isdigit(c); });
    return non_digit == s.end();
}

How to ssh from within a bash script?

If you want to continue to use passwords and not use key exchange then you can accomplish this with 'expect' like so:

#!/usr/bin/expect -f
spawn ssh user@hostname
expect "password:"
sleep 1
send "<your password>\r"
send "command1\r"
send "command2\r"
send "commandN\r"
interact

Using XAMPP, how do I swap out PHP 5.3 for PHP 5.2?

I know this doesn't help you, but I have to say that this is one of the reasons I jumped from XAMPP to WampServer. WampServer lets you install multiple versions of PHP, Apache and/or MySQL, and switch between them all via a menu option.

ASP.NET MVC - Getting QueryString values

Query string parameters can be accepted simply by using an argument on the action - i.e.

public ActionResult Foo(string someValue, int someOtherValue) {...}

which will accept a query like .../someroute?someValue=abc&someOtherValue=123

Other than that, you can look at the request directly for more control.

How can I make the browser wait to display the page until it's fully loaded?

You could start by having your site's main index page contain only a message saying "Loading". From here you could use ajax to fetch the contents of your next page and embed it into the current one, on completion removing the "Loading" message.

You might be able to get away with just including a loading message container at the top of your page which is 100% width/height and then removing the said div on load complete.

The latter may not work perfectly in both situations and will depend on how the browser renders content.

I'm not entirely sure if it's a good idea. For simple static sites I would say not. However, I have seen a lot of heavy JavaScript sites lately from design companies that use complex animation and are one page. These sites use a loading message to wait for all scripts/HTML/CSS to be loaded so that the page functions as expected.

How do I set the time zone of MySQL?

Simply run this on your MySQL server:

SET GLOBAL time_zone = '+8:00';

Where +8:00 will be your time zone.

Visual Studio Error: (407: Proxy Authentication Required)

While running Visual Studio 2012 behind a proxy, I received the following error message when checking for extension updates in the Visual Studio Gallery:

The remote server returned an unexpected response: (417) Expectation failed

A look around Google finally revealed a solution here:

Visual Studio 2012 Proxy Settings

http://www.jlpaonline.com/?p=176

Basically, he's saying the fix is to edit your devenv.exe.config file and change this:

<settings>      
   <ipv6 enabled="true"/> 
</settings>

to this:

 <settings>
   <ipv6 enabled="true"/>      
   <servicePointManager expect100Continue="false"/> 
 </settings> 

Catch error if iframe src fails to load . Error :-"Refused to display 'http://www.google.co.in/' in a frame.."

I have the same problem: Chrome doesn't fire onerror when the iframe src fails to load.

But in my case, I just needed to make a cross-domain HTTP request, so I used script src/onload/onerror instead of an iframe. It works well.

How do you uninstall all dependencies listed in package.json (NPM)?

This worked for me:

Open a command prompt or Git Bash in the node_modules folder of your project, then execute:

npm uninstall *

Removed all of the local packages for that project.

Get the last insert id with doctrine 2?

I had to use this after the flush to get the last insert id:

$em->persist($user);
$em->flush();
$user->getId();

Why does writeObject throw java.io.NotSerializableException and how do I fix it?

The fields of your object have in turn their fields, some of which do not implement Serializable. In your case the offending class is TransformGroup. How to solve it?

  • if the class is yours, make it Serializable
  • if the class is 3rd party, but you don't need it in the serialized form, mark the field as transient (see the sketch after this list)
  • if you need its data and it's third party, consider other means of serialization, like JSON, XML, BSON, MessagePack, etc. where you can get 3rd party objects serialized without modifying their definitions.
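
For the transient option, here is a minimal sketch (Thread stands in here for any non-serializable third-party type such as TransformGroup):

import java.io.Serializable;

public class Scene implements Serializable {
    private static final long serialVersionUID = 1L;

    private String name;              // serialized as usual
    private transient Thread worker;  // not Serializable, so it is skipped by writeObject

    public Scene(String name) {
        this.name = name;
        this.worker = new Thread();
    }
    // Transient fields come back as null after deserialization,
    // so re-create them after readObject if you still need them.
}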

window.onunload is not working properly in Chrome browser. Can any one help me?

This works :

var unloadEvent = function (e) {
    var confirmationMessage = "Warning: Leaving this page will result in any unsaved data being lost. Are you sure you wish to continue?";
    (e || window.event).returnValue = confirmationMessage; //Gecko + IE
    return confirmationMessage; //Webkit, Safari, Chrome etc.
};
window.addEventListener("beforeunload", unloadEvent);

How to make PDF file downloadable in HTML link?

This is a common issue but few people know there's a simple HTML 5 solution:

<a href="./directory/yourfile.pdf" download="newfilename">Download the pdf</a>

Where newfilename is the suggested filename for the user to save the file. Or it will default to the filename on the serverside if you leave it empty, like this:

<a href="./directory/yourfile.pdf" download>Download the pdf</a>

Compatibility: I tested this on Firefox 21 and Iron, both worked fine. It might not work on HTML5-incompatible or outdated browsers. The only browser I tested that didn't force download is IE...

Check compatibility here: http://caniuse.com/#feat=download

What is the location of mysql client ".my.cnf" in XAMPP for Windows?

Look in the MySQL config file C:\xampp\mysql\bin\my.ini.

At the top of that file are some comments:

# You can copy this file to
# C:/xampp/mysql/bin/my.cnf to set global options,
# mysql-data-dir/my.cnf to set server-specific options (in this
# installation this directory is C:/xampp/mysql/data) or
# ~/.my.cnf to set user-specific options.

There it tells you where to find your .my.cnf file.

How to randomly pick an element from an array

Use java.util.Random to generate a random number between 0 (inclusive) and the array length (exclusive): random_number, and then use that random number as the index: array[random_number]. A sketch is shown below.
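
A minimal sketch of that approach (the array contents are just an example):

import java.util.Random;

public class RandomPick {
    public static void main(String[] args) {
        int[] array = {10, 20, 30, 40, 50};
        Random random = new Random();

        // nextInt(bound) returns a value in [0, bound), so every index is equally likely
        int randomIndex = random.nextInt(array.length);
        System.out.println("Picked: " + array[randomIndex]);
    }
}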

iPhone UILabel text soft shadow

I advise you to use the shadowColor and shadowOffset properties of UILabel:

UILabel* label = [[UILabel alloc] init];
label.shadowColor = [UIColor whiteColor];
label.shadowOffset = CGSizeMake(0,1);

How does HTTP file upload work?

I have this sample Java Code:

import java.io.*;
import java.net.*;
import java.nio.charset.StandardCharsets;

public class TestClass {
    public static void main(String[] args) throws IOException {
        ServerSocket socket = new ServerSocket(8081);
        Socket accept = socket.accept();
        InputStream inputStream = accept.getInputStream();

        InputStreamReader inputStreamReader = new InputStreamReader(inputStream, StandardCharsets.UTF_8);
        int readChar; // read() returns an int so that -1 can signal end of stream
        while ((readChar = inputStreamReader.read()) != -1) {
            System.out.print((char) readChar);
        }

        inputStream.close();
        accept.close();
        System.exit(1);
    }
}

and I have this test.html file:

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <title>File Upload!</title>
</head>
<body>
<form method="post" action="http://localhost:8081" enctype="multipart/form-data">
    <input type="file" name="file" id="file">
    <input type="submit">
</form>
</body>
</html>

and finally the file I will be using for testing purposes, named a.dat has the following content:

0x39 0x69 0x65

if you interpret the bytes above as ASCII or UTF-8 characters, they will actually be representing:

9ie

So let's run our Java code, open up test.html in our favorite browser, upload a.dat, submit the form and see what our server receives:

POST / HTTP/1.1
Host: localhost:8081
Connection: keep-alive
Content-Length: 196
Cache-Control: max-age=0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Origin: null
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.97 Safari/537.36
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary06f6g54NVbSieT6y
DNT: 1
Accept-Encoding: gzip, deflate
Accept-Language: en,en-US;q=0.8,tr;q=0.6
Cookie: JSESSIONID=27D0A0637A0449CF65B3CB20F40048AF

------WebKitFormBoundary06f6g54NVbSieT6y
Content-Disposition: form-data; name="file"; filename="a.dat"
Content-Type: application/octet-stream

9ie
------WebKitFormBoundary06f6g54NVbSieT6y--

Well I am not surprised to see the characters 9ie because we told Java to print them treating them as UTF-8 characters. You may as well choose to read them as raw bytes..

Cookie: JSESSIONID=27D0A0637A0449CF65B3CB20F40048AF 

is actually the last HTTP Header here. After that comes the HTTP Body, where meta and contents of the file we uploaded actually can be seen.

How to validate an Email in PHP?

User data is very important for a good developer, so don't ask again and again for the same data; use some logic to correct basic errors in the data.

Before validating the email: first you have to remove all illegal characters from it.

//This will Remove all illegal characters from email
$email = filter_var($email, FILTER_SANITIZE_EMAIL);

After that, validate your email address using the filter_var() function.

filter_var($email, FILTER_VALIDATE_EMAIL) // To validate the email

For e.g.

<?php
$email = "[email protected]";

// Remove all illegal characters from email
$email = filter_var($email, FILTER_SANITIZE_EMAIL);

// Validate email
if (filter_var($email, FILTER_VALIDATE_EMAIL)) {
    echo $email." is a valid email address";
} else {
    echo $email." is not a valid email address";
}
?>

PHP Deprecated: Methods with the same name

As mentioned in the error, the official manual and the comments:

Replace

public function TSStatus($host, $queryPort)

with

public function __construct($host, $queryPort)

How to Set Selected value in Multi-Value Select in Jquery-Select2.?

Using select2 jquery library:

$('#selector').val(arrayOfValues).trigger('change')

Download File Using jQuery

By stating window.location.href = 'uploads/file.doc'; you show where you store your files. You might of course use .htaccess to force the required behaviour for stored files, but this might not always be handy...

It is better to create a server side php-file and place this content in it:

header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename='.$_REQUEST['f']);
readfile('../some_folder/some_subfolder/'.$_REQUEST['f']); 
exit;

This code will return ANY file as a download without showing where you actually store it.

You open this php-file via window.location.href = 'scripts/this_php_file.php?f=downloaded_file';

RuntimeError: module compiled against API version a but this version of numpy is 9

You may also want to check your $PYTHONPATH. I had changed mine in ~/.bashrc in order to get another package to work.

To check your path:

    echo $PYTHONPATH

To change your path (I use nano but you could edit another way)

    nano ~/.bashrc

Look for the line with export PYTHONPATH ...

After making changes, don't forget to

   source ~/.bashrc

Python for and if on one line

In python3 the variable i will be out of scope when you try to print it.

To get the value you want you should store the result of your operation inside a new variable:

my_list = ["one","two","three"]
result=[(i) for i in my_list if i=="two"]
print(result)

you will then get the following output:

['two']

Reverse HashMap keys and values in Java

They all are unique, yes

If you're sure that your values are unique, you can iterate over the entries of your old map.

Map<String, Character> myNewHashMap = new HashMap<>();
for(Map.Entry<Character, String> entry : myHashMap.entrySet()){
    myNewHashMap.put(entry.getValue(), entry.getKey());
}

Alternatively, you can use a Bi-Directional map like Guava provides and use the inverse() method :

BiMap<Character, String> myBiMap = HashBiMap.create();
myBiMap.put('a', "test one");
myBiMap.put('b', "test two");

BiMap<String, Character> myBiMapInversed = myBiMap.inverse();

As it turns out, you can also do it this way:

Map<String, Integer> map = new HashMap<>();
map.put("a",1);
map.put("b",2);

Map<Integer, String> mapInversed = 
    map.entrySet()
       .stream()
       .collect(Collectors.toMap(Map.Entry::getValue, Map.Entry::getKey))

Finally, I added my contribution to the proton pack library, which contains utility methods for the Stream API. With that you could do it like this:

Map<Character, String> mapInversed = MapStream.of(map).inverseMapping().collect();

Is it a good idea to index datetime field in mysql?

MySQL recommends using indexes for a variety of reasons including elimination of rows between conditions: http://dev.mysql.com/doc/refman/5.0/en/mysql-indexes.html

This makes your datetime column an excellent candidate for an index if you are going to be using it in conditions frequently in queries. If your only condition is BETWEEN NOW() AND DATE_ADD(NOW(), INTERVAL 30 DAY) and you have no other index in the condition, MySQL will have to do a full table scan on every query. I'm not sure how many rows are generated in 30 days, but as long as it's less than about 1/3 of the total rows it will be more efficient to use an index on the column.

Your question about creating an efficient database is very broad. I'd say to just make sure that it's normalized and all appropriate columns are indexed (i.e. ones used in joins and where clauses).

Checking character length in ruby

Verification, do not forget the to_s:

 def not_too_long?(value)
   value.to_s.length <= 8
 end

JavaScript, get date of the next day

Using the Date object guarantees correct rollover into the next month or year. For example, if you try to create April 31st:

new Date(2014,3,31)        // Thu May 01 2014 00:00:00

Please note that months are zero-indexed, so Jan is 0, Feb is 1, etc.
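
Building on that rollover, a minimal sketch for getting tomorrow's date:

var today = new Date();
var tomorrow = new Date(today.getFullYear(), today.getMonth(), today.getDate() + 1);
// Called on Dec 31st, this correctly rolls over to Jan 1st of the next year.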

How to convert datatype:object to float64 in python?

You can use this to convert to an array of float (works in Python 3.7.6):

import numpy as np

X = np.array(X, dtype=float)

Catching access violation exceptions?

There is a very easy way to catch any kind of exception (division by zero, access violation, etc.) in Visual Studio using a try/catch (...) block. A minor tweak of the project settings is enough: just enable the /EHa option. Under Project Properties -> C/C++ -> Code Generation, set "Enable C++ Exceptions" to "Yes With SEH Exceptions". That's it!

See details here: http://msdn.microsoft.com/en-us/library/1deeycx5(v=vs.80).aspx
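
A minimal sketch of what this enables, assuming the project is built with /EHa (the behavior is Microsoft-specific; catch (...) only sees SEH exceptions under that setting):

#include <iostream>

int main() {
    try {
        volatile int* p = nullptr;
        *p = 42;                     // access violation
    } catch (...) {                  // with /EHa this also catches the SEH exception
        std::cout << "Caught a structured (SEH) exception" << std::endl;
    }
    return 0;
}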

How do you create a temporary table in an Oracle database?

Note that CREATE TABLE ... AS SELECT on its own creates an ordinary, permanent table. For a real temporary table, create a global temporary table:

CREATE GLOBAL TEMPORARY TABLE table_temp_list_objects
ON COMMIT PRESERVE ROWS
AS
SELECT o.owner, o.object_name FROM sys.all_objects o WHERE o.object_type = 'TABLE';

Most efficient way to concatenate strings?

The StringBuilder.Append() method is much better than using the + operator. But I've found that, when executing 1000 concatenations or less, String.Join() is even more efficient than StringBuilder.

StringBuilder sb = new StringBuilder();
sb.Append(someString);

The only problem with String.Join is that you have to concatenate the strings with a common delimiter.

Edit: as @ryanversaw pointed out, you can make the delimiter string.Empty.

string key = String.Join("_", new String[] 
{ "Customers_Contacts", customerID, database, SessionID });

An example of how to use getopts in bash

#!/bin/bash

usage() { echo "Usage: $0 [-s <45|90>] [-p <string>]" 1>&2; exit 1; }

while getopts ":s:p:" o; do
    case "${o}" in
        s)
            s=${OPTARG}
            ((s == 45 || s == 90)) || usage
            ;;
        p)
            p=${OPTARG}
            ;;
        *)
            usage
            ;;
    esac
done
shift $((OPTIND-1))

if [ -z "${s}" ] || [ -z "${p}" ]; then
    usage
fi

echo "s = ${s}"
echo "p = ${p}"

Example runs:

$ ./myscript.sh
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -h
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s "" -p ""
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 10 -p foo
Usage: ./myscript.sh [-s <45|90>] [-p <string>]

$ ./myscript.sh -s 45 -p foo
s = 45
p = foo

$ ./myscript.sh -s 90 -p bar
s = 90
p = bar

List(of String) or Array or ArrayList

You can do something like this,

  Dim lstOfStrings As New List(Of String) From {"Value1", "Value2", "Value3"}

Collection Initializers

Marquee text in Android

To get this to work, I had to use ALL three of the things (ellipsize, selected, and singleLine) mentioned already:

TextView tv = (TextView)findViewById(R.id.someTextView);
tv.setSelected(true);
tv.setEllipsize(TruncateAt.MARQUEE);
tv.setSingleLine(true);

How to get document height and width without using jquery

You can try also:

document.body.offsetHeight
document.body.offsetWidth
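
Note that document.body only measures the body element. Depending on what you actually need, these standard DOM properties may be closer to the full document or viewport size:

document.documentElement.scrollHeight   // full document height
document.documentElement.clientHeight   // viewport height, excluding scrollbars
window.innerWidth                       // viewport width, including scrollbars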

Python strftime - date without leading 0?

>>> import datetime
>>> d = datetime.datetime.now()
>>> d.strftime('X%d/X%m/%Y').replace('X0','X').replace('X','')
'5/5/2011'
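
If your platform's strftime supports the non-portable "-" flag (glibc on Linux and macOS/BSD do; Windows does not), a shorter form is possible, though the replace trick above remains the portable option:

>>> d.strftime('%-d/%-m/%Y')
'5/5/2011'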

How to set full calendar to a specific start date when it's initialized for the 1st time?

You should use the options 'year', 'month', and 'date' when initializing to specify the initial date value used by fullcalendar:

$('#calendar').fullCalendar({
 year: 2012,
 month: 4,
 date: 25
});  // This will initialize for May 25th, 2012.

See the function setYMD(date,y,m,d) in the fullcalendar.js file; note that the JavaScript setMonth, setDate, and setFullYear functions are used, so your month value needs to be 0-based (Jan is 0).

UPDATE: As others have noted in the comments, the correct way now (V3 as of writing this edit) is to initialize the defaultDate property to a value that is

anything the Moment constructor accepts, including an ISO8601 date string like "2014-02-01"

as it uses Moment.js. Documentation here.

Updated example:

$('#calendar').fullCalendar({
    defaultDate: "2012-05-25"
});  // This will initialize for May 25th, 2012.

compare two list and return not matching items using linq

The naive approach:

MsgList.Where(x => !SentList.Any(y => y.MsgID == x.MsgID))

Be aware this will take up to m*n operations as it compares every MsgID in SentList to each in MsgList ("up to" because it will short-circuit when it does happen to match).
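
If the lists are large, a sketch of an O(m + n) alternative (assuming MsgID is a simple value type such as int, and that System.Linq is in scope):

var sentIds = new HashSet<int>(SentList.Select(y => y.MsgID));
var notSent = MsgList.Where(x => !sentIds.Contains(x.MsgID)).ToList();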

How to disable scrolling the document body?

I know this is an ancient question, but I just thought that I'd weigh in.

I'm using disableScroll. Simple, and it works like a dream.

I have had some trouble disabling scroll on the body while still allowing it on child elements (like a modal or a sidebar). It looks like something can be done using disableScroll.on([element], [options]), but I haven't gotten that to work just yet.


The reason this is preferred over overflow: hidden; on the body is that the overflow: hidden; approach can get nasty, since other things might add overflow: hidden; as well, like this:

... This is good for preloaders and such, since it is rendered before the CSS has finished loading.

But it causes problems when an open navigation should add a class to the body tag (like <body class="body__nav-open">). Then it turns into one big tug-of-war with overflow: hidden !important and all kinds of crap.

How can I revert multiple Git commits (already pushed) to a published repository?

If you've already pushed things to a remote server (and you have other developers working off the same remote branch), the important thing to bear in mind is that you don't want to rewrite history.

Don't use git reset --hard

You need to revert changes instead; otherwise any checkout that has the removed commits in its history will add them back to the remote repository the next time someone pushes from it, and any other checkout will then pull them back in on the next pull.

If you have not pushed changes to a remote, you can use

git reset --hard <hash>

If you have pushed changes, but are sure nobody has pulled them you can use

git reset --hard
git push -f

If you have pushed changes, and someone has pulled them into their checkout you can still do it but the other team-member/checkout would need to collaborate:

(you) git reset --hard <hash>
(you) git push -f

(them) git fetch
(them) git reset --hard origin/branch

But generally speaking that's turning into a mess. So, reverting:

The commits to remove are the latest

This is possibly the most common case: you've made some commits, pushed them out, and then realized they shouldn't exist.

First you need to identify the commit you want to go back to; you can do that with:

git log

Just look for the commit before your changes and note its hash. You can limit the log to the most recent commits using the -n flag: git log -n 5

Then revert the offending commits so your branch reflects the state you want your other developers to see:

git revert <hash of first borked commit>..HEAD

The final step is to create your own local branch reapplying your reverted changes:

git branch my-new-branch
git checkout my-new-branch
git revert <hash of each revert commit>

Continue working in my-new-branch until you're done, then merge it in to your main development branch.

The commits to remove are intermingled with other commits

If the commits you want to revert are not all together, it's probably easiest to revert them individually. Again using git log find the commits you want to remove and then:

git revert <hash>
git revert <another hash>
..

Then, again, create your branch for continuing your work:

git branch my-new-branch
git checkout my-new-branch
git revert <hash of each revert commit>

Then again, hack away and merge in when you're done.

You should end up with a commit history which looks like this on my-new-branch

2012-05-28 10:11 AD7six             o [my-new-branch] Revert "Revert "another mistake""
2012-05-28 10:11 AD7six             o Revert "Revert "committing a mistake""
2012-05-28 10:09 AD7six             o [master] Revert "committing a mistake"
2012-05-28 10:09 AD7six             o Revert "another mistake"
2012-05-28 10:08 AD7six             o another mistake
2012-05-28 10:08 AD7six             o committing a mistake
2012-05-28 10:05 Bob                I XYZ nearly works

Better way®

Especially now that you're aware of the dangers of several developers working in the same branch, consider always using feature branches for your work. All that means is working in a branch until something is finished, and only then merging it into your main branch. Also consider using tools such as git-flow to automate branch creation in a consistent way.

combining two data frames of different lengths

Referring to Andrie's answer, which suggests using plyr::rbind.fill(): combined with t() you get something like cbind.fill() (which is not part of plyr) that will construct your data frame with consideration of identical case numbers.

Is gcc's __attribute__((packed)) / #pragma pack unsafe?

(The following is a very artificial example cooked up to illustrate.) One major use of packed structs is where you have a stream of data (say 256 bytes) to which you wish to supply meaning. If I take a smaller example, suppose I have a program running on my Arduino which sends via serial a packet of 16 bytes which have the following meaning:

0: message type (1 byte)
1: target address, MSB
2: target address, LSB
3: data (chars)
...
F: checksum (1 byte)

Then I can declare something like

typedef struct {
  uint8_t msgType;
  uint16_t targetAddr; // may have to bswap
  uint8_t data[12];
  uint8_t checksum;
} __attribute__((packed)) myStruct;

and then I can refer to the targetAddr bytes via aStruct.targetAddr rather than fiddling with pointer arithmetic.

Now with alignment stuff happening, taking a void* pointer in memory to the received data and casting it to a myStruct* will not work unless the compiler treats the struct as packed (that is, it stores data in the order specified and uses exactly 16 bytes for this example). There are performance penalties for unaligned reads, so using packed structs for data your program is actively working with is not necessarily a good idea. But when your program is supplied with a list of bytes, packed structs make it easier to write programs which access the contents.
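
A minimal sketch of that cast, assuming the packed myStruct definition above, <stdint.h>, and a 16-byte buffer already filled from the serial port:

uint8_t buf[16];
/* ... read 16 bytes from the serial port into buf ... */

const myStruct *msg = (const myStruct *)buf;   /* layout matches only because the struct is packed */
uint16_t addr = msg->targetAddr;               /* may still need a byte swap for endianness */
uint8_t  sum  = msg->checksum;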

Otherwise you end up using C++ and writing a class with accessor methods and stuff that does pointer arithmetic behind the scenes. In short, packed structs are for dealing efficiently with packed data, and packed data may be what your program is given to work with. For the most part, your code should read values out of the structure, work with them, and write them back when done. All else should be done outside the packed structure. Part of the problem is the low-level stuff that C tries to hide from the programmer, and the hoop-jumping that is needed if such things really do matter to the programmer. (You almost need a different 'data layout' construct in the language so that you can say 'this thing is 48 bytes long, foo refers to the data 13 bytes in, and should be interpreted thus'; and a separate structured-data construct, where you say 'I want a structure containing two ints, called alice and bob, and a float called carol, and I don't care how you implement it' -- in C both these use cases are shoehorned into the struct construct.)

How to use mongoimport to import csv

When I was trying to import the CSV file, I was getting an error. Here is what I did: first I changed the header line's column names to capital letters, removed "-" and added "_" where needed. Then I typed the command below to import the CSV into Mongo:

$ mongoimport --db=database_name --collection=collection_name --type=csv --file=file_name.csv --headerline  


How do I add button on each row in datatable?

var table =$('#example').DataTable( {
    data: yourdata ,
    columns: [
        { data: "id" },
        { data: "name" },
        { data: "parent" },
        { data: "date" },

        {data: "id" , render : function ( data, type, row, meta ) {
              return type === 'display'  ?
                '<a href="<?php echo $delete_url;?>'+ data +'" ><i class="fe fe-delete"></i></a>' :
                data;
            }},
    ],
});

Nginx 403 forbidden for all files

Old question, but I had the same issue. I tried every answer above; nothing worked. What fixed it for me, though, was removing the domain and adding it again. I'm using Plesk, and I installed Nginx AFTER the domain was already there.

Did a local backup to /var/www/backups first though. So I could easily copy back the files.

Strange problem....

Select row and element in awk

Since awk and perl are closely related...


Perl equivalents of @Dennis's awk solutions:

To print the second line:
perl -ne 'print if $. == 2' file

To print the second field:
perl -lane 'print $F[1]' file

To print the third field of the fifth line:
perl -lane 'print $F[2] if $. == 5' file


Perl equivalent of @Glenn's solution:

Print the j'th field of the i'th line

perl -lanse 'print $F[$j-1] if $. == $i' -- -i=5 -j=3 file


Perl equivalents of @Hai's solutions:

if you are looking for lines whose second column contains abc:

perl -lane 'print if $F[1] =~ /abc/' foo

... and if you want to print only a particular column:

perl -lane 'print $F[2] if $F[1] =~ /abc/' foo

... and for a particular line number:

perl -lane 'print $F[2] if $F[1] =~ /abc/ && $. == 5' foo


-l removes newlines, and adds them back in when printing
-a autosplits the input line into array @F, using whitespace as the delimiter
-n loop over each line of the input file
-e execute the code within quotes
$F[1] is the second element of the array, since Perl starts at 0
$. is the line number

Jackson serialization: ignore empty values (or null)

You can also set the global option:

objectMapper.setSerializationInclusion(JsonInclude.Include.NON_NULL);
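
Or, with Jackson 2.x, the same thing can be limited to a single class (or field) with an annotation; the class below is just a hypothetical DTO for illustration:

import com.fasterxml.jackson.annotation.JsonInclude;

@JsonInclude(JsonInclude.Include.NON_NULL)   // omit null fields for this class only
public class MyDto {
    public String name;
    public String optionalValue;             // left null -> not serialized
}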

Disabled href tag

You can use:

<a href="/" onclick="return false;">123n</a>

How to use RecyclerView inside NestedScrollView?

This is what worked for me:

<android.support.v4.widget.NestedScrollView
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:fillViewport="true"
    app:layout_behavior="@string/appbar_scrolling_view_behavior">

    <android.support.v7.widget.RecyclerView
        android:id="@+id/rv_recycler_view"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        app:layout_behavior="@string/appbar_scrolling_view_behavior"/>
</android.support.v4.widget.NestedScrollView>

Efficient way to insert a number into a sorted array of numbers?

Here's a version that uses lodash.

const _ = require('lodash');
sortedArr.splice(_.sortedIndex(sortedArr, valueToInsert), 0, valueToInsert);

Note: sortedIndex does a binary search.
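
If you'd rather not pull in lodash, a minimal sketch of the same idea: binary-search for the insertion index, then splice.

function insertSorted(arr, value) {
  let lo = 0, hi = arr.length;
  while (lo < hi) {
    const mid = (lo + hi) >>> 1;    // midpoint of the remaining search range
    if (arr[mid] < value) lo = mid + 1;
    else hi = mid;
  }
  arr.splice(lo, 0, value);         // insert while keeping the array sorted
  return arr;
}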

JOptionPane - input dialog box program

One solution is to actually use an integer array instead of separate test strings:

You could loop, parsing each response from JOptionPane.showInputDialog into the corresponding element of the array.

Arrays.sort can then be used to sort the scores, allowing you to pick out the 2 highest values.

The average can then be calculated easily by adding these 2 values and dividing by 2.

int[] testScore = new int[3];

// Read the three scores from input dialogs.
for (int i = 0; i < testScore.length; i++) {
   testScore[i] = Integer.parseInt(JOptionPane.showInputDialog("Please input mark for test " + i + ": "));
}

// Sort ascending so the two highest scores end up in the last two positions.
Arrays.sort(testScore);
System.out.println("Average: " + (testScore[1] + testScore[2])/2.0);