Programs & Examples on Amazon Web Services

Only questions about programming are on-topic; general server help can be obtained at https://serverfault.com. Amazon Web Services (AWS) is a collection of commercial cloud-computing services built around virtualized resources.

How to check if a specified key exists in a given S3 bucket using Java

Use a ListObjectsRequest with Prefix set to your key, then check whether one of the returned objects matches the key exactly.

.NET code:

public bool Exists(string key)
{
    using (Amazon.S3.AmazonS3Client client = (Amazon.S3.AmazonS3Client)Amazon.AWSClientFactory.CreateAmazonS3Client(m_accessKey, m_accessSecret))
    {
        ListObjectsRequest request = new ListObjectsRequest();
        request.BucketName = m_bucketName;
        request.Prefix = key;
        using (ListObjectsResponse response = client.ListObjects(request))
        {
            // A prefix match can return more than one key, so confirm an exact match.
            foreach (S3Object o in response.S3Objects)
            {
                if (o.Key == key)
                    return true;
            }
            return false;
        }
    }
}
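If you are working in Python instead, a minimal boto3 sketch of the same check (the function and names below are placeholders, not part of the original answer) can simply HEAD the key:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

def key_exists(bucket, key):
    """Return True if the exact key exists in the given bucket."""
    try:
        s3.head_object(Bucket=bucket, Key=key)  # metadata-only request
        return True
    except ClientError as e:
        # A missing key surfaces as error code '404' on head_object
        if e.response['Error']['Code'] == '404':
            return False
        raise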

What does ECU units, CPU core and memory mean when I launch an instance

ECUs (EC2 Compute Units) are a rough measure of processor performance introduced by Amazon to let you compare their EC2 instances ("servers").

CPU performance is of course a multi-dimensional measure, so putting a single number on it (like "5 ECU") can only be a rough approximation. If you want to know more exactly how well a processor performs for a task you have in mind, you should choose a benchmark that is similar to your task.

In early 2014, there was a nice benchmarking site comparing cloud hosting offers by tens of different benchmarks, over at CloudHarmony benchmarks. However, it seems to be gone now (and archive.org can't help, as it was a web application). Only an introductory blog post is still available.

Also useful: ec2instances.info, which at least aggregates the ECU information of different EC2 instances for comparison. (Add column "Compute Units (ECU)" to make it work.)

DynamoDB vs MongoDB NoSQL

Bear in mind, I've only experimented with MongoDB...

From what I've read, DynamoDB has come a long way in terms of features. It used to be a super-basic key-value store with extremely limited storage and querying capabilities. It has since grown, now supporting bigger document sizes + JSON support and global secondary indices. The gap between what DynamoDB and MongoDB offers in terms of features grows smaller with every month. The new features of DynamoDB are expanded on here.

Many of the MongoDB vs. DynamoDB comparisons are out of date due to the recent addition of DynamoDB features. However, this post offers some other convincing points to choose DynamoDB, namely that it's simple, low maintenance, and often low cost. Another discussion of database choices here was interesting to read, though slightly old.

My takeaway: if you're doing serious database queries or working in languages not supported by DynamoDB, use MongoDB. Otherwise, stick with DynamoDB.

Difference between Amazon EC2 and AWS Elastic Beanstalk

First off, EC2 and Elastic Compute Cloud are the same thing.

Next, AWS encompasses the range of Web Services that includes EC2 and Elastic Beanstalk. It also includes many others, such as S3, RDS, and DynamoDB.

EC2

EC2 is Amazon's service that allows you to create a server (AWS calls these "instances") in the AWS cloud. You pay by the hour and only for what you use. You can do whatever you want with this instance, and you can launch as many instances as you need.

Elastic Beanstalk

Elastic Beanstalk is one layer of abstraction away from the EC2 layer. Elastic Beanstalk will set up an "environment" for you that can contain a number of EC2 instances, an optional database, and a few other AWS components such as an Elastic Load Balancer, an Auto Scaling group, and a security group. Elastic Beanstalk then manages these items for you whenever you want to update the software running in AWS. Elastic Beanstalk doesn't add any cost on top of the resources it creates for you. If you have 10 hours of EC2 usage, then all you pay for is 10 compute hours.

Running Wordpress

For running WordPress, it comes down to whatever you are most comfortable with. You could run it straight on a single EC2 instance, use a solution from the AWS Marketplace, or use Elastic Beanstalk.

What to pick?

If you want to reduce system operations and just focus on the website, then Elastic Beanstalk is the best choice. Elastic Beanstalk supports a PHP stack (as well as others). You can keep your site in version control and easily deploy to your environment whenever you make changes. It will also set up an Auto Scaling group which can spawn more EC2 instances if traffic grows.

Here's the first result off of Google when searching for "elastic beanstalk wordpress": https://www.otreva.com/blog/deploying-wordpress-amazon-web-services-aws-ec2-rds-via-elasticbeanstalk/

Do you get charged for a 'stopped' instance on EC2?

Short answer - no.

You will only be charged for the time that your instance is up and running, in hour increments. If you are using other services in conjunction you may be charged for those but it would be separate from your server instance.

How can I tell how many objects I've stored in an S3 bucket?

With s3cmd, simply run the following command (on an Ubuntu system):

s3cmd ls -r s3://mybucket | wc -l
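If you'd rather do this from Python, a boto3 sketch (the bucket name is a placeholder) that counts the keys with a paginator:

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')

count = 0
for page in paginator.paginate(Bucket='mybucket'):
    # 'Contents' is absent on empty pages, so default to an empty list
    count += len(page.get('Contents', []))
print(count)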

Listing files in a specific "folder" of a AWS S3 bucket

You can check the content type: S3 uses a special application/x-directory type for folder placeholder objects.

bucket.objects({:delimiter=>"/", :prefix=>"f1/"}).each { |obj| p obj.object.content_type }
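If you are working with boto3 instead of the Ruby SDK, a rough equivalent sketch (the bucket name is a placeholder) lists the immediate contents under the f1/ prefix:

import boto3

s3 = boto3.client('s3')
resp = s3.list_objects_v2(Bucket='my-bucket', Prefix='f1/', Delimiter='/')

for obj in resp.get('Contents', []):        # objects directly under f1/
    print(obj['Key'])
for cp in resp.get('CommonPrefixes', []):   # sub-"folders" under f1/
    print(cp['Prefix'])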

How to specify credentials when connecting to boto3 S3?

There are numerous ways to store credentials while still using boto3.resource(). I'm using the AWS CLI method myself. It works perfectly.

https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
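As a sketch of two common options (the profile name and key values below are placeholders):

import boto3

# Option 1: use a named profile from ~/.aws/credentials
session = boto3.Session(profile_name='my-profile')
s3 = session.resource('s3')

# Option 2: pass keys directly (better kept out of source code)
s3 = boto3.resource(
    's3',
    aws_access_key_id='AKIA...',
    aws_secret_access_key='...',
)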

Listing contents of a bucket with boto3

This is similar to an 'ls' but it does not take into account the prefix folder convention and will list the objects in the bucket. It's left up to the reader to filter out prefixes which are part of the Key name.

Using the legacy boto library (Python 2):

from boto.s3.connection import S3Connection

conn = S3Connection() # assumes boto.cfg setup
bucket = conn.get_bucket('bucket_name')
for obj in bucket.get_all_keys():
    print(obj.key)

Using boto3 (Python 3):

from boto3 import client

conn = client('s3')  # assumes credentials are configured, e.g. via ~/.aws/credentials
for key in conn.list_objects(Bucket='bucket_name')['Contents']:
    print(key['Key'])
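Note that list_objects returns at most 1000 keys per call. For larger buckets, a paginator keeps the loop simple; a sketch (the bucket name is a placeholder):

import boto3

s3 = boto3.client('s3')
paginator = s3.get_paginator('list_objects_v2')
for page in paginator.paginate(Bucket='bucket_name'):
    for obj in page.get('Contents', []):
        print(obj['Key'])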

Amazon S3 direct file upload from client browser - private key disclosure

If you are willing to use a 3rd party service, auth0.com supports this integration. The auth0 service exchanges a 3rd party SSO service authentication for an AWS temporary session token with limited permissions.

See: https://github.com/auth0-samples/auth0-s3-sample/
and the auth0 documentation.

Extension exists but uuid_generate_v4 fails

Looks like the extension is not installed in the particular database in which you require it.

You should connect to this particular database with

 \connect my_database

Then install the extension in this database

 CREATE EXTENSION "uuid-ossp";

Read file content from S3 bucket with boto3

When you want to read a file with a different configuration than the default one, feel free to use either mpu.aws.s3_read(s3path) directly or the copy-pasted code:

import boto3
import mpu.aws


def s3_read(source, profile_name=None):
    """
    Read a file from an S3 source.

    Parameters
    ----------
    source : str
        Path starting with s3://, e.g. 's3://bucket-name/key/foo.bar'
    profile_name : str, optional
        AWS profile

    Returns
    -------
    content : bytes

    Raises
    ------
    botocore.exceptions.NoCredentialsError
        Botocore is not able to find your credentials. Either specify
        profile_name or add the environment variables AWS_ACCESS_KEY_ID,
        AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN.
        See https://boto3.readthedocs.io/en/latest/guide/configuration.html
    """
    session = boto3.Session(profile_name=profile_name)
    s3 = session.client('s3')
    bucket_name, key = mpu.aws._s3_path_split(source)
    s3_object = s3.get_object(Bucket=bucket_name, Key=key)
    body = s3_object['Body']
    return body.read()
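A hedged usage example (the path and profile name are placeholders):

content = s3_read('s3://bucket-name/key/foo.bar', profile_name='dev')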

Font from origin has been blocked from loading by Cross-Origin Resource Sharing policy

The only thing that has worked for me (probably because I had inconsistencies with www. usage):

Paste this in to your .htaccess file:

<IfModule mod_headers.c>
<FilesMatch "\.(eot|font.css|otf|ttc|ttf|woff)$">
    Header set Access-Control-Allow-Origin "*"
</FilesMatch>
</IfModule>
<IfModule mod_mime.c>
# Web fonts
AddType application/font-woff woff
AddType application/vnd.ms-fontobject eot

# Browsers usually ignore the font MIME types and sniff the content,
# however, Chrome shows a warning if other MIME types are used for the
# following fonts.
AddType application/x-font-ttf ttc ttf
AddType font/opentype otf

# Make SVGZ fonts work on iPad:
# https://twitter.com/FontSquirrel/status/14855840545
AddType     image/svg+xml svg svgz
AddEncoding gzip svgz

</IfModule>

# rewrite www.example.com -> example.com

<IfModule mod_rewrite.c>
RewriteCond %{HTTPS} !=on
RewriteCond %{HTTP_HOST} ^www\.(.+)$ [NC]
RewriteRule ^ http://%1%{REQUEST_URI} [R=301,L]
</IfModule>

http://ce3wiki.theturninggate.net/doku.php?id=cross-domain_issues_broken_web_fonts

Connect to Amazon EC2 file directory using Filezilla and SFTP

All you have to do is:

  1. Open the Site Manager in FileZilla
  2. Add a new site
  3. Enter the host address, and the port if it is not the default port
  4. Set the protocol to SFTP
  5. Set the logon type to key file
  6. Enter the username
  7. Choose the key file. Beware: on Windows, the file dialog looks for .ppk files by default, so choose "All files" in the dropdown, then select your .pem file, and you are good to go.

Since you have added and configured the site, the next time you want to connect just choose your saved site and connect. That is it.

Access denied; you need (at least one of) the SUPER privilege(s) for this operation

To import a database file in .sql.gz format, remove the definer and import it using the command below:

zcat path_to_db_to_import.sql.gz | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | mysql -u user -p new_db_name

  1. Earlier, export the database in .sql.gz format using the command below:

    mysqldump -u user -p old_db | gzip -9 > path_to_db_exported.sql.gz

  2. Import that exported database, removing the definer, using the command below:

    zcat path_to_db_exported.sql.gz | sed -e 's/DEFINER[ ]*=[ ]*[^*]*\*/\*/' | mysql -u user -p new_db

Boto3 to download all files from a S3 Bucket

I had been running into this problem for a while, and across all of the different forums I went through I hadn't seen a full end-to-end snippet of what works. So I went ahead, took all the pieces (and added some of my own), and created a full end-to-end S3 downloader!

This will not only download files automatically, but if the S3 files are in subdirectories, it will create them on local storage as well. In my application's case I need to set permissions and owners, so I have added that too (it can be commented out if not needed).

This has been tested and works in a Docker environment (Kubernetes), but I have added the environment variables in the script in case you want to test/run it locally.

I hope this helps someone out in their quest to automate S3 downloads. I also welcome any advice, info, etc. on how this can be better optimized.

#!/usr/bin/python3
import gc
import logging
import os
import signal
import sys
import time
from datetime import datetime

import boto
from boto.exception import S3ResponseError
from pythonjsonlogger import jsonlogger

formatter = jsonlogger.JsonFormatter('%(message)%(levelname)%(name)%(asctime)%(filename)%(lineno)%(funcName)')

json_handler_out = logging.StreamHandler()
json_handler_out.setFormatter(formatter)

#Manual Testing Variables If Needed
#os.environ["DOWNLOAD_LOCATION_PATH"] = "some_path"
#os.environ["BUCKET_NAME"] = "some_bucket"
#os.environ["AWS_ACCESS_KEY"] = "some_access_key"
#os.environ["AWS_SECRET_KEY"] = "some_secret"
#os.environ["LOG_LEVEL_SELECTOR"] = "DEBUG, INFO, or ERROR"

#Setting Log Level Test
logger = logging.getLogger('json')
logger.addHandler(json_handler_out)
logger_levels = {
    'ERROR' : logging.ERROR,
    'INFO' : logging.INFO,
    'DEBUG' : logging.DEBUG
}
logger_level_selector = os.environ["LOG_LEVEL_SELECTOR"]
logger.setLevel(logger_level_selector)

#Getting Date/Time
now = datetime.now()
logger.info("Current date and time : ")
logger.info(now.strftime("%Y-%m-%d %H:%M:%S"))

#Establishing S3 Variables and Download Location
download_location_path = os.environ["DOWNLOAD_LOCATION_PATH"]
bucket_name = os.environ["BUCKET_NAME"]
aws_access_key_id = os.environ["AWS_ACCESS_KEY"]
aws_access_secret_key = os.environ["AWS_SECRET_KEY"]
logger.debug("Bucket: %s" % bucket_name)
logger.debug("Key: %s" % aws_access_key_id)
logger.debug("Secret: %s" % aws_access_secret_key)
logger.debug("Download location path: %s" % download_location_path)

#Creating Download Directory
if not os.path.exists(download_location_path):
    logger.info("Making download directory")
    os.makedirs(download_location_path)

#Signal Hooks are fun
class GracefulKiller:
    kill_now = False
    def __init__(self):
        signal.signal(signal.SIGINT, self.exit_gracefully)
        signal.signal(signal.SIGTERM, self.exit_gracefully)
    def exit_gracefully(self, signum, frame):
        self.kill_now = True

#Downloading from S3 Bucket
def download_s3_bucket():
    conn = boto.connect_s3(aws_access_key_id, aws_access_secret_key)
    logger.debug("Connection established: ")
    bucket = conn.get_bucket(bucket_name)
    logger.debug("Bucket: %s" % str(bucket))
    bucket_list = bucket.list()
#    logger.info("Number of items to download: {0}".format(len(bucket_list)))

    for s3_item in bucket_list:
        key_string = str(s3_item.key)
        logger.debug("S3 Bucket Item to download: %s" % key_string)
        s3_path = download_location_path + "/" + key_string
        logger.debug("Downloading to: %s" % s3_path)
        local_dir = os.path.dirname(s3_path)

        if not os.path.exists(local_dir):
            logger.info("Local directory doesn't exist, creating it... %s" % local_dir)
            os.makedirs(local_dir)
            logger.info("Updating local directory permissions to %s" % local_dir)
#Comment or Uncomment Permissions based on Local Usage
            os.chmod(local_dir, 0o775)
            os.chown(local_dir, 60001, 60001)
        logger.debug("Local directory for download: %s" % local_dir)
        try:
            logger.info("Downloading File: %s" % key_string)
            s3_item.get_contents_to_filename(s3_path)
            logger.info("Successfully downloaded File: %s" % s3_path)
            #Updating Permissions
            logger.info("Updating Permissions for %s" % str(s3_path))
#Comment or Uncomment Permissions based on Local Usage
            os.chmod(s3_path, 0o664)
            os.chown(s3_path, 60001, 60001)
        except (OSError, S3ResponseError) as e:
            logger.error("Fatal error in s3_item.get_contents_to_filename", exc_info=True)
            # logger.error("Exception in file download from S3: {}".format(e))
            continue
        logger.info("Deleting %s from S3 Bucket" % str(s3_item.key))
        s3_item.delete()

def main():
    killer = GracefulKiller()
    while not killer.kill_now:
        logger.info("Checking for new files on S3 to download...")
        download_s3_bucket()
        logger.info("Done checking for new files, will check in 120s...")
        gc.collect()
        sys.stdout.flush()
        time.sleep(120)
if __name__ == '__main__':
    main()

Hadoop/Hive : Loading data from .csv on a local machine

There is another way of enabling this,

  1. use hdfs dfs -copyFromLocal to copy the .csv data file from your local computer to somewhere in HDFS, say '/path/filename'

  2. enter the Hive console and run the following script to load the file as a Hive table. Note that '\054' is the octal ASCII code for a comma, used here as the field delimiter.


CREATE EXTERNAL TABLE table_name (foo INT, bar STRING)
 COMMENT 'from csv file'
 ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
 STORED AS TEXTFILE
 LOCATION '/path/filename';

AWS S3: The bucket you are attempting to access must be addressed using the specified endpoint

It seems likely that this bucket was created in a different region, i.e. not us-west-2. That's the only time I've seen "The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint."

US Standard is us-east-1

How to load npm modules in AWS Lambda?

You can now use Lambda Layers for this. Simply add a layer containing the package you need and it will run perfectly.

Follow this post: https://medium.com/@anjanava.biswas/nodejs-runtime-environment-with-aws-lambda-layers-f3914613e20e

Add Keypair to existing EC2 instance

This happened to me earlier (didn't have access to an EC2 instance someone else created but had access to AWS web console) and I blogged the answer: http://readystate4.com/2013/04/09/aws-gaining-ssh-access-to-an-ec2-instance-you-lost-access-to/

Basically, you can detach the EBS volume and attach it to an EC2 instance you do have access to. Add your SSH public key to ~ec2-user/.ssh/authorized_keys on this attached volume, then put it back on the old EC2 instance. Step-by-step instructions are in the link, using the Amazon Linux AMI.

No need to make snapshots or create a new cloned instance.

How to Configure SSL for Amazon S3 bucket

If you really need it, consider redirections.

For example, on request to assets.my-domain.example.com/path/to/file you could perform a 301 or 302 redirection to my-bucket-name.s3.amazonaws.com/path/to/file or s3.amazonaws.com/my-bucket-name/path/to/file (please remember that in the first case my-bucket-name cannot contain any dots, otherwise it won't match *.s3.amazonaws.com, s3.amazonaws.com stated in S3 certificate).

Not tested, but I believe it would work. I see a few gotchas, however.

The first one is pretty obvious: an additional request to get this redirection. And I doubt you could use the redirection server provided by your domain name registrar (you'd have to upload the proper certificate there somehow), so you have to use your own server for this.

The second one is that you can have URLs with your domain name in the page source code, but when, for example, a user opens the picture in a separate tab, the address bar will display the target URL.

Possible reasons for timeout when trying to access EC2 instance

My issue: I had port 22 open for "My IP", then changed internet connections, which changed my IP address. So I had to update the rule.

AWS Lambda import module error in python

Your package directories in your zip must be world readable too.

To identify if this is your problem (Linux) use:

find $ZIP_SOURCE -type d -not -perm /001 -printf %M\ "%p\n"

To fix use:

find $ZIP_SOURCE  -type d -not -perm /001 -exec chmod o+x {} \;

File readable is also a requirement. To identify if this is your problem use:

find $ZIP_SOURCE -type f -not -perm /004 -printf %M\ "%p\n"

To fix use:

find $ZIP_SOURCE  -type f -not -perm /004 -exec chmod o+r {} \;

If you had this problem and you are working in Linux, check that umask is appropriately set when creating or checking out of git your python packages e.g. put this in you packaging script or .bashrc:

umask 0002 

Why is this HTTP request not working on AWS Lambda?

I had the very same problem and then I realized that programming in NodeJS is actually different from Python or Java, as it's based on JavaScript's asynchronous model. I'll try to use simple concepts, as there may be a few new folks who are interested or may come to this question.

Let's look at the following code :

var http = require('http'); // (1)
exports.handler = function(event, context) {
  console.log('start request to ' + event.url)
  http.get(event.url,  // (2)
  function(res) {  //(3)
    console.log("Got response: " + res.statusCode);
    context.succeed();
  }).on('error', function(e) {
    console.log("Got error: " + e.message);
    context.done(null, 'FAILURE');
  });

  console.log('end request to ' + event.url); //(4)
}

Whenever you call a method from the http package (1), the call is registered as an event, and that event gets its own separate execution cycle. The get call (2) is actually the starting point of that separate cycle.

Now, the callback function at (3) will execute within that separate cycle, while your code continues along its own execution path, jumps straight to (4), and finishes, because there is nothing more for it to do.

But the event fired at (2) is still executing somewhere, and it will take its own sweet time to finish. Pretty bizarre, right? Well, no, it is not. This is how NodeJS works, and it's pretty important that you wrap your head around this concept. This is where JavaScript Promises come to help.

You can read more about JavaScript Promises here. In a nutshell, a Promise lets you keep the flow of execution inline instead of scattering it across extra callbacks.

Most of the common NodeJS packages have a Promise-based version of their API available, and there are other approaches, like BlueBirdJS, that address the same problem.

The code that you had written above can be loosely re-written as follows.

'use strict';
console.log('Loading function');
var rp = require('request-promise');
exports.handler = (event, context, callback) => {    

    var options = {
    uri: 'https://httpbin.org/ip',
    method: 'POST',
    body: {

    },
    json: true 
};


    rp(options).then(function (parsedBody) {
            console.log(parsedBody);
        })
        .catch(function (err) {
            // POST failed... 
            console.log(err);
        });

    context.done(null);
};

Please note that the above code will not work if you paste it directly into AWS Lambda. For Lambda, you will need to package the node modules (such as request-promise) with the code base too.

Change key pair for ec2 instance

This is for those who have two different pem files and, for security reasons, want to discard one of the two. Let's say we want to discard 1.pem.

  1. Connect to server 2 and copy the ssh public key from ~/.ssh/authorized_keys
  2. Connect to server 1 in another terminal and paste the key into ~/.ssh/authorized_keys. You will now have two public ssh keys here
  3. Now, just for your confidence, try to connect to server 1 with 2.pem. You will be able to connect to server 1 with both 1.pem and 2.pem
  4. Now comment out the 1.pem key and connect using ssh -i 2.pem user@server1

AWS S3 - How to fix 'The request signature we calculated does not match the signature' error?

Just to add to the many different ways this can show up.

If you are using Safari on iOS and you are connected to the Safari Technology Preview console, you will see the same problem. If you disconnect from the console, the problem will go away.

Of course it makes troubleshooting other issues difficult but it is a 100% repro.

I am trying to figure out what I can change in STP to stop it from doing this but have not found it yet.

Downloading an entire S3 bucket?

If you use Visual Studio, download "AWS Toolkit for Visual Studio".

After installing it, go to Visual Studio - AWS Explorer - S3 - your bucket - double-click.

In the window you will be able to select all files. Right click and download files.

How to fix apt-get: command not found on AWS EC2?

Check with "uname -a" and/or "lsb_release -a" to see which version of Linux you are actually running on your AWS instance. The default Amazon AMI image uses YUM for its package manager.

Amazon S3 - HTTPS/SSL - Is it possible?

As previously stated, it's not directly possible, but you can set up Apache or nginx + SSL on an EC2 instance, CNAME your desired domain to that, and reverse-proxy to the (non-custom domain) S3 URLs.

Can't push image to Amazon ECR - fails with "no basic auth credentials"

If you run $(aws ecr get-login --region us-east-1) it will all be done for you.

How to upload a file to directory in S3 bucket using boto

Using boto3

import logging
import boto3
from botocore.exceptions import ClientError


def upload_file(file_name, bucket, object_name=None):
    """Upload a file to an S3 bucket

    :param file_name: File to upload
    :param bucket: Bucket to upload to
    :param object_name: S3 object name. If not specified then file_name is used
    :return: True if file was uploaded, else False
    """

    # If S3 object_name was not specified, use file_name
    if object_name is None:
        object_name = file_name

    # Upload the file
    s3_client = boto3.client('s3')
    try:
        response = s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True

For more:- https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html

Why is php not running?

One big gotcha is that PHP is disabled in user home directories by default, so if you are testing from ~/public_html it doesn't work. Check /etc/apache2/mods-enabled/php5.conf

# Running PHP scripts in user directories is disabled by default
# 
# To re-enable PHP in user directories comment the following lines
# (from <IfModule ...> to </IfModule>.) Do NOT set it to On as it
# prevents .htaccess files from disabling it.
#<IfModule mod_userdir.c>
#    <Directory /home/*/public_html>
#        php_admin_flag engine Off
#    </Directory>
#</IfModule>

Other than that, installing on Ubuntu is really easy, as all the stuff you used to have to put in httpd.conf is done automatically.

Opening port 80 EC2 Amazon web services

Some quick tips:

  1. Disable the inbuilt firewall on your Windows instances.
  2. Use the IP address rather than the DNS entry.
  3. Create a security group for tcp ports 1 to 65000 and for source 0.0.0.0/0. It's obviously not to be used for production purposes, but it will help avoid the Security Groups as a source of problems.
  4. Check that you can actually ping your server. This may also necessitate some Security Group modification.

Job for mysqld.service failed See "systemctl status mysqld.service"

The issue is with /etc/mysql/my.cnf. This file may have been modified by other packages that you installed. This is how it should originally look:

# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License, version 2.0,
# as published by the Free Software Foundation.
#
# This program is also distributed with certain software (including
# but not limited to OpenSSL) that is licensed under separate terms,
# as designated in a particular file or component or in included license
# documentation.  The authors of MySQL hereby grant you an additional
# permission to link the program and your derivative works with the
# separately licensed software that they have included with MySQL.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License, version 2.0, for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301  USA

#
# The MySQL  Server configuration file.
#
# For explanations see
# http://dev.mysql.com/doc/mysql/en/server-system-variables.html

# * IMPORTANT: Additional settings that can override those from this file!
#   The files must end with '.cnf', otherwise they'll be ignored.
#

!includedir /etc/mysql/conf.d/
    
!includedir /etc/mysql/mysql.conf.d/

What is difference between Lightsail and EC2?

Lightsail VPSs are bundles of existing AWS products, offered through a significantly simplified interface. The difference is that Lightsail offers you a limited and fixed menu of options but with much greater ease of use. Other than the narrower scope of Lightsail in order to meet the requirements for simplicity and low cost, the underlying technology is the same.

The pre-defined bundles can be described:

% aws lightsail --region us-east-1 get-bundles
{
    "bundles": [
        {
            "name": "Nano",
            "power": 300,
            "price": 5.0,
            "ramSizeInGb": 0.5,
            "diskSizeInGb": 20,
            "transferPerMonthInGb": 1000,
            "cpuCount": 1,
            "instanceType": "t2.nano",
            "isActive": true,
            "bundleId": "nano_1_0"
        },
        ...
    ]
}

It's worth reading through the Amazon EC2 T2 Instances documentation, particularly the CPU Credits section which describes the base and burst performance characteristics of the underlying instances.

Importantly, since your Lightsail instances run in VPC, you still have access to the full spectrum of AWS services, e.g. S3, RDS, and so on, as you would from any EC2 instance.

Row was updated or deleted by another transaction (or unsaved-value mapping was incorrect)

First check your imports: when you use Session and Transaction, they should come from org.hibernate, and remove the @Transactional annotation. Most importantly, if the entity class uses @GeneratedValue(strategy=GenerationType.AUTO) (or any other strategy), then you should not set the id yourself when creating the model/entity object. The final conclusion: if you want to pass the id field (i.e. the PK) yourself, remove @GeneratedValue from the entity class.

How to pass a querystring or route parameter to AWS Lambda from Amazon API Gateway

After reading several of these answers, I used a combination of several in Aug of 2018 to retrieve the query string params through lambda for python 3.6.

First, I went to API Gateway -> My API -> resources (on the left) -> Integration Request. Down at the bottom, select Mapping Templates then for content type enter application/json.

Next, select the Method Request Passthrough template that Amazon provides and select save and deploy your API.

Then, in the Lambda function, event['params'] is how you access all of your parameters. For the query string: event['params']['querystring'].
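A minimal Python 3 handler sketch, assuming the Method Request Passthrough template set up above ('name' is a placeholder parameter):

def lambda_handler(event, context):
    # With the passthrough template, query string values arrive here
    querystring = event.get('params', {}).get('querystring', {})
    name = querystring.get('name', 'world')
    return {'message': 'Hello, {}'.format(name)}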

AccessDenied for ListObjects for S3 bucket when permissions are s3:*

I got the same error when using the policy below, even though I had "s3:ListBucket" for the ListObjects operation.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::<bucketname>/*",
                "arn:aws:s3:::*-bucket/*"
            ],
            "Effect": "Allow"
        }
    ]
}

Then I fixed it by adding one line, "arn:aws:s3:::<bucketname>":

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:GetObjectAcl"
            ],
            "Resource": [
                "arn:aws:s3:::<bucketname>",
                "arn:aws:s3:::<bucketname>/*",
                "arn:aws:s3:::*-bucket/*"
            ],
            "Effect": "Allow"
        }
    ]
}

API Gateway CORS: no 'Access-Control-Allow-Origin' header

After changing your function or code, follow these two steps: first enable CORS, then deploy the API again. You need to redeploy every time you make a change.

How to write a file or data to an S3 object using boto3

You no longer have to convert the contents to binary before writing to the file in S3. The following example creates a new text file (called newfile.txt) in an S3 bucket with string contents:

import boto3

s3 = boto3.resource(
    's3',
    region_name='us-east-1',
    aws_access_key_id=KEY_ID,
    aws_secret_access_key=ACCESS_KEY
)
content="String content to write to a new S3 file"
s3.Object('my-bucket-name', 'newfile.txt').put(Body=content)

EC2 Instance Cloning

The easier way is through the web management console:

  1. go to the instance
  2. select the instance and click on instance action
  3. create image

Once you have an image you can launch another cloned instance, data and all. :)

How to safely upgrade an Amazon EC2 instance from t1.micro to large?

Create AMI -> Boot AMI on large instance.

More info http://docs.amazonwebservices.com/AmazonEC2/gsg/2006-06-26/creating-an-image.html

You can do this all from the admin console too at aws.amazon.com

How to find my php-fpm.sock?

I know this is an old question, but since I just ran into the same problem and found the answer, I thought I might share it. The problem was due to the configuration in the pool.d/ directory.

Open

/etc/php5/fpm/pool.d/www.conf

find

listen = 127.0.0.1:9000

change to

listen = /var/run/php5-fpm.sock

Restart both nginx and php5-fpm service afterwards and check if php5-fpm.sock already created.

How can I resolve the error "The security token included in the request is invalid" when running aws iam upload-server-certificate?

This can also happen when you have disabled MFA. There will be an old long-term entry left in the AWS credentials file.

Edit the file manually with editor of choice, here using vi (please backup before):

vi ~/.aws/credentials

Then remove the [default-long-term] section. As result in a minimal setup there should be one section [default] left with the actual credentials.

[default-long-term]
aws_access_key_id = ...
aws_secret_access_key = ...
aws_mfa_device = ...

How to handle errors with boto3?

Or a comparison on the class name e.g.

except ClientError as e:
    if 'EntityAlreadyExistsException' == e.__class__.__name__:
        # handle specific error

Because these exception classes are created dynamically, you can never import the class and catch it directly in Python.
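Alternatively, you can compare the error code carried in the exception's response, which avoids matching on the generated class name. A sketch (the IAM call is only an example):

import boto3
from botocore.exceptions import ClientError

iam = boto3.client('iam')
try:
    iam.create_user(UserName='example-user')
except ClientError as e:
    if e.response['Error']['Code'] == 'EntityAlreadyExists':
        pass  # the user already exists; handle as needed
    else:
        raise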

Downloading folders from aws s3, cp or sync?

The sync method first lists both the source and destination paths and copies only the differences (name, size, etc.).

The cp --recursive method lists the source path and copies (overwrites) everything to the destination path.

If you have possible matches in the destination path, I would suggest sync, as one LIST request on the destination path will save you many unnecessary PUT requests, meaning cheaper and possibly faster.

AWS CLI S3 A client error (403) occurred when calling the HeadObject operation: Forbidden

Trying to solve this problem myself, I discovered that there is no HeadBucket permission. It looks like there is, because that's what the error message tells you, but actually the HEAD operation requires the ListBucket permission. I also discovered that my IAM policy and my bucket policy were conflicting. Make sure you check both.

How to delete files recursively from an S3 bucket

The best way is to use a lifecycle rule to delete the whole bucket's contents. Programmatically, you can use the following PHP code to PUT the lifecycle rule.

$expiration = array('Date' => date('U', strtotime('GMT midnight')));
$result = $s3->putBucketLifecycle(array(
            'Bucket' => 'bucket-name',
            'Rules' => array(
                array(
                    'Expiration' => $expiration,
                    'ID' => 'rule-name',
                    'Prefix' => '',
                    'Status' => 'Enabled',
                ),
            ),
        ));

In the above case, all the objects will be deleted starting from the Date: today, GMT midnight.

You can also specify Days as follows. But with Days it will wait for at least 24 hrs (1 day is minimum) to start deleting the bucket contents.

$expiration = array('Days' => 1);

Is there a way to list all resources in AWS

I know it is an old question, but I would like to help too.

Actually, we have AWS Config, which helps us search for all resources in our cloud. You can perform SQL queries too.

I really encourage you all to look into this awesome service.

How to yum install Node.JS on Amazon Linux

You can update/install Node by reinstalling the installed packages against the current version, which can save you from a lot of errors during the update.

This is done with nvm using the command below. Here, I have updated my Node version to 8 and reinstalled all the available packages for v8 too!

nvm i v8 --reinstall-packages-from=default

It works on AWS Linux instance as well.

Why do people use Heroku when AWS is present? What distinguishes Heroku from AWS?

Well! I observe that Heroku is popular with budding and newly minted developers, while AWS has a more advanced developer persona. DigitalOcean is also a major player in this space. Cloudways makes it much easier to create a LAMP stack in a click on DigitalOcean and AWS. Having all services and package updates available in a click is far better than doing everything manually.

You can check out completely here: https://www.cloudways.com/blog/host-php-on-aws-cloud/

AWS - Disconnected : No supported authentication methods available (server sent :publickey)

For me, I just had to tell FileZilla where the private keys were:

  1. Select Edit > Settings from the main menu
  2. In the Settings dialog box, go to Connection > SFTP
  3. Click the "Add key file..." button
  4. Navigate to and then select the desired PEM file(s)

Make a bucket public in Amazon S3

Amazon provides a policy generator tool:

https://awspolicygen.s3.amazonaws.com/policygen.html

After that, you can enter the policy requirements for the bucket on the AWS console:

https://console.aws.amazon.com/s3/home

Show tables, describe tables equivalent in redshift

Tomasz Tybulewicz's answer is a good way to go.

SELECT * FROM pg_table_def WHERE tablename = 'YOUR_TABLE_NAME' AND schemaname = 'YOUR_SCHEMA_NAME';

If the schema name is not defined in the search path, that query will return an empty result. First check the search path with the query below.

SHOW SEARCH_PATH

If the schema name is not in the search path, you can reset the search path:

SET SEARCH_PATH to '$user', public, YOUR_SCHEMA_NAME

Query EC2 tags from within instance

You can alternatively use the describe-instances cli call rather than describe-tags:

This example shows how to get the value of tag 'my-tag-name' for the instance:

aws ec2 describe-instances \
  --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) \
  --query "Reservations[*].Instances[*].Tags[?Key=='my-tag-name'].Value" \
  --region ap-southeast-2 --output text

Change the region to suit your local circumstances. This may be useful where your instance profile policy grants the describe-instances privilege but not describe-tags.
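The same lookup can be done from Python with boto3; a sketch (the region and tag name are placeholders, and the metadata call assumes IMDSv1 is enabled):

import urllib.request
import boto3

instance_id = urllib.request.urlopen(
    'http://169.254.169.254/latest/meta-data/instance-id').read().decode()

ec2 = boto3.client('ec2', region_name='ap-southeast-2')
resp = ec2.describe_instances(InstanceIds=[instance_id])
tags = resp['Reservations'][0]['Instances'][0].get('Tags', [])
print(next((t['Value'] for t in tags if t['Key'] == 'my-tag-name'), None))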

EC2 instance has no public DNS

For those using CloudFormation, the key properties are EnableDnsSupport and EnableDnsHostnames which should be set to true

VPC: {
    Type: 'AWS::EC2::VPC',
    Properties: {
      CidrBlock: '10.0.0.0/16',
      EnableDnsSupport: true,
      EnableDnsHostnames: true,
      InstanceTenancy: 'default',
      Tags: [
        {
          Key: 'env',
          Value: 'dev'
        }]
    }
  }

How do you search an amazon s3 bucket?

Fast forward to 2020, and using aws-okta as our 2fa, the following command, while slow as hell to iterate through all of the objects and folders in this particular bucket (+270,000) worked fine.

aws-okta exec dev -- aws s3 ls my-cool-bucket --recursive | grep needle-in-haystax.txt

Benefits of EBS vs. instance-store (and vice-versa)

Most people choose to use EBS-backed instances because they are stateful: everything you have running and installed inside them will survive a stop/start or an instance failure.

Instance store is stateless; you lose it, with all the data inside, in the event of an instance failure. However, it is free and faster, because the instance volume is tied to the physical server where the VM is running.

How do I install Python 3 on an AWS EC2 instance?

If you do a

sudo yum list | grep python3

you will see that while they don't have a "python3" package, they do have a "python34" package, or a more recent release, such as "python36". Installing it is as easy as:

sudo yum install python34 python34-pip

Amazon S3 exception: "The specified key does not exist"

Step 1: Get the latest aws-java-sdk

<!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk -->
<dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk</artifactId>
    <version>1.11.660</version>
</dependency>

Step 2: The correct imports

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;

If you are sure the bucket exists, a "Specified key does not exist" error means the key name is not spelled exactly right (for example, it contains an unexpected slash or special characters). Refer to the documentation for the naming conventions.

The document quotes:

If the requested object is available in the bucket and users are still getting the 404 NoSuchKey error from Amazon S3, check the following:

  • Confirm that the request matches the object name exactly, including the capitalization of the object name. Requests for S3 objects are case sensitive. For example, if an object is named myimage.jpg but Myimage.jpg is requested, the requester receives a 404 NoSuchKey error.
  • Confirm that the requested path matches the path to the object. For example, if the path to an object is awsexamplebucket/Downloads/February/Images/image.jpg but the requested path is awsexamplebucket/Downloads/February/image.jpg, the requester receives a 404 NoSuchKey error.
  • If the path to the object contains any spaces, be sure that the request uses the correct syntax to recognize the path. For example, if you're using the AWS CLI to download an object to your Windows machine, you must use quotation marks around the object path, similar to: aws s3 cp "s3://awsexamplebucket/Backup Copy Job 4/3T000000.vbk".
  • Optionally, you can enable server access logging to review request records in further detail for issues that might be causing the 404 error.

AWSCredentials credentials = new BasicAWSCredentials(AWS_ACCESS_KEY_ID, AWS_SECRET_KEY);
// Wire the credentials into the builder; otherwise the BasicAWSCredentials object is unused
AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withRegion(Regions.US_EAST_1)
        .build();
ObjectListing objects = s3Client.listObjects("bigdataanalytics");
System.out.println(objects.getObjectSummaries());

Permission denied (publickey) when SSH Access to Amazon EC2 instance

The same thing happened to me, but all that had happened was that the private key got dropped from the keychain on my local machine.

ssh-add -K

re-added the key, and then the ssh command to connect worked again.

Can an AWS Lambda function call another

There are a lot of answers, but none stresses that calling another Lambda function is not the recommended solution for synchronous calls; what you should really be using is Step Functions.

Reasons why it is not the recommended solution:

  • you are paying for both functions
  • your code is responsible for retries

You can also use Step Functions for quite complex logic, such as parallel steps and catching failures. Every execution is also logged, which makes debugging much simpler.

Amazon products API - Looking for basic overview and information

Straight from the horse's mouth: Summary of Product Advertising API Operations, which has the following categories:

  • Find Items
  • Find Out More About Specific Items
  • Shopping Cart
  • Customer Content
  • Seller Information
  • Other Operations

How to rename files and folder in Amazon S3?

As answered by Naaz, direct renaming in S3 is not possible.

I have attached a code snippet which will copy all the contents.

The code works; just add your AWS access key and secret key.

Here's what I did in the code:

-> copy the source folder contents (nested children and folders) and paste them into the destination folder

-> when the copying is complete, delete the source folder

package com.bighalf.doc.amazon;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.List;

import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.CopyObjectRequest;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class Test {

public static boolean renameAwsFolder(String bucketName,String keyName,String newName) {
    boolean result = false;
    try {
        AmazonS3 s3client = getAmazonS3ClientObject();
        List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
        //some meta data to create empty folders start
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentLength(0);
        InputStream emptyContent = new ByteArrayInputStream(new byte[0]);
        //some meta data to create empty folders end

        //final location is the location where the child folder contents of the existing folder should go
        String finalLocation = keyName.substring(0,keyName.lastIndexOf('/')+1)+newName;
        for (S3ObjectSummary file : fileList) {
            String key = file.getKey();
            //updating child folder location with the newlocation
            String destinationKeyName = key.replace(keyName,finalLocation);
            if(key.charAt(key.length()-1)=='/'){
                //if name ends with suffix (/) means its a folders
                PutObjectRequest putObjectRequest = new PutObjectRequest(bucketName, destinationKeyName, emptyContent, metadata);
                s3client.putObject(putObjectRequest);
            }else{
                //if name doesnot ends with suffix (/) means its a file
                CopyObjectRequest copyObjRequest = new CopyObjectRequest(bucketName, 
                        file.getKey(), bucketName, destinationKeyName);
                s3client.copyObject(copyObjRequest);
            }
        }
        boolean isFodlerDeleted = deleteFolderFromAws(bucketName, keyName);
        return isFodlerDeleted;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return result;
}

public static boolean deleteFolderFromAws(String bucketName, String keyName) {
    boolean result = false;
    try {
        AmazonS3 s3client = getAmazonS3ClientObject();
        //deleting folder children
        List<S3ObjectSummary> fileList = s3client.listObjects(bucketName, keyName).getObjectSummaries();
        for (S3ObjectSummary file : fileList) {
            s3client.deleteObject(bucketName, file.getKey());
        }
        //deleting actual passed folder
        s3client.deleteObject(bucketName, keyName);
        result = true;
    } catch (Exception e) {
        e.printStackTrace();
    }
    return result;
}

public static void main(String[] args) {
    intializeAmazonObjects();
    boolean result = renameAwsFolder(bucketName, keyName, newName);
    System.out.println(result);
}

private static AWSCredentials credentials = null;
private static AmazonS3 amazonS3Client = null;
private static final String ACCESS_KEY = "";
private static final String SECRET_ACCESS_KEY = "";
private static final String bucketName = "";
private static final String keyName = "";
//renaming folder c to x from key name
private static final String newName = "";

public static void intializeAmazonObjects() {
    credentials = new BasicAWSCredentials(ACCESS_KEY, SECRET_ACCESS_KEY);
    amazonS3Client = new AmazonS3Client(credentials);
}

public static AmazonS3 getAmazonS3ClientObject() {
    return amazonS3Client;
}

}

Amazon S3 upload file and get URL

@hussachai's and @Jeffrey Kemp's answers are pretty good. But they have one thing in common: the URL returned is virtual-host-style, not path-style. For more info on S3 URL styles, refer to AWS S3 URL Styles. In case some people want a path-style S3 URL generated, here are the steps. Basically everything will be the same as in @hussachai's and @Jeffrey Kemp's answers, with only one setting changed, as below:

AmazonS3Client s3Client = (AmazonS3Client) AmazonS3ClientBuilder.standard()
                    .withRegion("us-west-2")
                    .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
                    .withPathStyleAccessEnabled(true)
                    .build();

// Upload a file as a new object with ContentType and title specified.
PutObjectRequest request = new PutObjectRequest(bucketName, stringObjKeyName, fileToUpload);
s3Client.putObject(request);
URL s3Url = s3Client.getUrl(bucketName, stringObjKeyName);
logger.info("S3 url is " + s3Url.toExternalForm());

This will generate url like: https://s3.us-west-2.amazonaws.com/mybucket/myfilename

AWS EFS vs EBS vs S3 (differences & when to use?)

AWS EFS, EBS and S3: from a functional standpoint, here is the difference.

EFS:

  1. Network file system: it can be shared across several servers, even between regions. The same is not available with EBS. This can be used especially for storing ETL programs without security risk.

  2. Highly available, scalable service.

  3. Running any application that has a high workload, requires scalable storage, and must produce output quickly.

  4. It can provide higher throughput and can match sudden file system growth, even for workloads of up to 500,000 IOPS or 10 GB per second.

  5. Lift-and-shift application support: EFS is elastic, available, and scalable, and enables you to move enterprise applications easily and quickly without needing to re-architect them.

  6. Analytics for big data: It has the ability to run big data applications, which demand significant node throughput, low-latency file access, and read-after-write operations.

EBS:

  1. For NoSQL databases, EBS offers the low-latency performance and dependability they need for peak performance.

S3:

  1. Robust performance, scalability, and availability: Amazon S3 scales storage resources free from resource procurement cycles or upfront investments.
  2. Data lake and big data analytics: create a data lake to hold raw data in its native format, then use machine learning and analytics tools to draw insights.
  3. Backup and restoration: secure, robust backup and restoration solutions.
  4. Data archiving.
  5. S3 is an object store good at storing vast numbers of backups or user files. Unlike EBS or EFS, S3 is not limited to EC2. Files stored within an S3 bucket can be accessed programmatically or directly from services such as AWS CloudFront. Many websites use it to hold their content and media files, which may be served efficiently via AWS CloudFront.

AWS : The config profile (MyName) could not be found

Make sure you are in the correct virtual environment. I updated PyCharm and for some reason had to point my project at my virtual environment again. When I opened the terminal, I was not in my virtual environment when attempting zappa update (and got this error). After restarting PyCharm, everything was back to normal.

SSH to Elastic Beanstalk instance

You need to connect to the EC2 instance directly using its public IP address. You cannot connect using the Elastic Beanstalk URL.

You can find the instance ip address by looking it up in the ec2 console.

You also need to make sure port 22 is open. By default the EB CLI closes port 22 after a ssh connection is complete. You can call eb ssh -o to keep the port open after the ssh session is complete.

Warning: you should know that Elastic Beanstalk can replace your instance at any time. State is not guaranteed on any of your Elastic Beanstalk instances. It's probably better to use ssh for testing and debugging purposes only, as anything you modify can go away at any time.

How to choose an AWS profile when using boto3 to connect to CloudFront

I think the docs aren't wonderful at exposing how to do this. It has been a supported feature for some time, however, and there are some details in this pull request.

So there are three different ways to do this:

Option A) Create a new session with the profile

    dev = boto3.session.Session(profile_name='dev')

Option B) Change the profile of the default session in code

    boto3.setup_default_session(profile_name='dev')

Option C) Change the profile of the default session with an environment variable

    $ AWS_PROFILE=dev ipython
    >>> import boto3
    >>> s3dev = boto3.resource('s3')
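A quick follow-up sketch showing such a session in use against CloudFront (the profile name is a placeholder):

import boto3

dev = boto3.session.Session(profile_name='dev')
cloudfront = dev.client('cloudfront')
# List distributions visible to the 'dev' profile
print(cloudfront.list_distributions().get('DistributionList', {}).get('Quantity'))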

Read file from aws s3 bucket using node fs

This will do it:

new AWS.S3().getObject({ Bucket: this.awsBucketName, Key: keyName }, function(err, data)
{
    if (!err)
        console.log(data.Body.toString());
});

S3 limit to objects in a bucket

It looks like the limit has changed. You can store 5TB for a single object.

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.

http://aws.amazon.com/s3/faqs/#How_much_data_can_I_store

AWS ssh access 'Permission denied (publickey)' issue

For my Ubuntu images, it is actually the ubuntu user and NOT ec2-user ;)

AWS S3: how do I see how much disk space is using

In addition to Christopher's answer:

If you need to count the total size of a versioned bucket, use:

aws s3api list-object-versions --bucket BUCKETNAME --output json --query "[sum(Versions[].Size)]"

It counts both Latest and Archived versions.
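On very large buckets the single CLI query can be slow; a boto3 paginator sketch (the bucket name is a placeholder) computes the same total incrementally:

import boto3

s3 = boto3.client('s3')
total = 0
paginator = s3.get_paginator('list_object_versions')
for page in paginator.paginate(Bucket='BUCKETNAME'):
    # Sum the stored size of every object version, latest and archived alike
    total += sum(v['Size'] for v in page.get('Versions', []))
print(total)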

Setting up FTP on Amazon Cloud Server

In case you have ufw enabled, remember to allow ftp:

> sudo ufw allow ftp

It took me 2 days to realise that I had enabled ufw.

boto3 client NoRegionError: You must specify a region error only sometimes

With Python 2, I have found that the boto3 library does not source the region from ~/.aws/config if the region is defined in a profile other than default. So you have to define it when creating the session.

session = boto3.Session(
    profile_name='NotDefault',
    region_name='ap-southeast-2'
)

print(session.available_profiles)

client = session.client(
    'ec2'
)

Where my ~/.aws/config file looks like this:

[default]
region=ap-southeast-2

[NotDefault]
region=ap-southeast-2

I do this because I use different profiles for different logins to AWS, Personal and Work.

What is the recommended way to delete a large number of items from DynamoDB?

My approach to deleting all rows from a table in DynamoDB is to pull all rows out of the table using DynamoDB's ScanAsync and then feed the result list to DynamoDB's AddDeleteItems. The C# code below works fine for me.

    public async Task DeleteAllReadModelEntitiesInTable()
    {
        List<ReadModelEntity> readModels;

        var conditions = new List<ScanCondition>();
        readModels = await _context.ScanAsync<ReadModelEntity>(conditions).GetRemainingAsync();

        var batchWork = _context.CreateBatchWrite<ReadModelEntity>();
        batchWork.AddDeleteItems(readModels);
        await batchWork.ExecuteAsync();
    }

Note: Deleting the table and then recreating it again from the web console may cause problems if using YAML/CloudFormation to create the table.
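For Python users, a comparable boto3 sketch of the same scan-then-batch-delete approach (the table name is a placeholder):

import boto3

table = boto3.resource('dynamodb').Table('ReadModelEntities')  # placeholder table name
key_names = [k['AttributeName'] for k in table.key_schema]

# Scan page by page and delete in batches (BatchWriteItem under the hood)
with table.batch_writer() as batch:
    scan_kwargs = {}
    while True:
        page = table.scan(**scan_kwargs)
        for item in page['Items']:
            batch.delete_item(Key={k: item[k] for k in key_names})
        if 'LastEvaluatedKey' not in page:
            break
        scan_kwargs['ExclusiveStartKey'] = page['LastEvaluatedKey']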

AWS S3 CLI - Could not connect to the endpoint URL

In case it is not working in your default region, try providing a region close to you. This worked for me:

   PS C:\Users\shrig> aws  configure
   AWS Access Key ID [****************C]:
   AWS Secret Access Key [****************WD]:
   Default region name [us-east1]: ap-south-1
   Default output format [text]:

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256

Try this combination.

const s3 = new AWS.S3({
  endpoint: 's3-ap-south-1.amazonaws.com',       // Bucket region
  accessKeyId: 'A-----------------U',
  secretAccessKey: 'k------ja----------------soGp',
  Bucket: 'bucket_name',
  useAccelerateEndpoint: true,
  signatureVersion: 'v4',
  region: 'ap-south-1'             // Bucket region
});

What data is stored in Ephemeral Storage of Amazon EC2 instance?

Basically, the root volume (your entire virtual system disk) is ephemeral, but only if you choose to create an AMI backed by the Amazon EC2 instance store.

If you choose to create an AMI backed by EBS, then your root volume is an EBS volume and everything you have on it will be preserved between reboots.

If you are not sure what type of volume you have, look under EC2->Elastic Block Store->Volumes in your AWS console and if your AMI root volume is listed there then you are safe. Also, if you go to EC2->Instances and then look under column "Root device type" of your instance and if it says "ebs", then you don't have to worry about data on your root device.

More details here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/RootDeviceStorage.html
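
If you prefer to check this programmatically instead of in the console, a minimal boto3 sketch (the instance ID is a placeholder) looks like this:

import boto3

ec2 = boto3.client('ec2')
resp = ec2.describe_instances(InstanceIds=['i-0123456789abcdef0'])
instance = resp['Reservations'][0]['Instances'][0]
# 'ebs' means the root volume persists; 'instance-store' means it is ephemeral
print(instance['RootDeviceType'])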

How do I get AWS_ACCESS_KEY_ID for Amazon?

Amazon changes the admin console from time to time, hence the previous answers above are irrelevant in 2020.

The way to get the secret access key (Oct.2020) is:

  1. go to IAM console: https://console.aws.amazon.com/iam
  2. click on "Users".
  3. go to the user whose access key you need.

As I look at the answers above, I can assume my answer will become irrelevant in a year max :-)

HTH

How to get the instance id from within an ec2 instance?

You can just make an HTTP request to GET any metadata by passing your metadata parameters.

curl http://169.254.169.254/latest/meta-data/instance-id

or

wget -q -O - http://169.254.169.254/latest/meta-data/instance-id

You won't be billed for HTTP requests to get Metadata and Userdata.

Else

You can use the EC2 Instance Metadata Query Tool, which is a simple bash script that uses curl to query the EC2 instance metadata from within a running EC2 instance, as mentioned in the documentation.

Download the tool:

$ wget http://s3.amazonaws.com/ec2metadata/ec2-metadata

Now run the command to get the required data:

$ ec2-metadata -i

Refer:

http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html

https://aws.amazon.com/items/1825?externalID=1825
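
If you want the same thing from code rather than the shell, here is a minimal Python sketch using only the standard library (it assumes IMDSv1 is enabled; IMDSv2-only instances require a session token first):

import urllib.request

METADATA_URL = 'http://169.254.169.254/latest/meta-data/instance-id'
with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
    print(resp.read().decode())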

Happy To Help.. :)

Cannot ping AWS EC2 instance

Creation of a new security group with All ICMP worked for me.

S3 - Access-Control-Allow-Origin Header

I was having a similar problem with loading web fonts, when I clicked on 'add CORS configuration', in the bucket properties, this code was already there:

<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
        <AllowedHeader>Authorization</AllowedHeader>
    </CORSRule>
</CORSConfiguration> 

I just clicked save and it worked a treat; my custom web fonts were loading in IE and Firefox. I'm no expert on this, I just thought this might help you out.

Missing Authentication Token while accessing API Gateway?

For the record, if you are not using credentials, this error also shows up when you set the request validator on your POST/PUT method to "validate body, query string parameters and headers" or to "validate query string parameters and headers". In that case API Gateway will look for credentials in the headers and reject the request. To sum up, if you don't intend to send credentials and want to keep the method open, don't set that option in the request validator (set it to either NONE or to validate the body only).

Trying to SSH into an Amazon Ec2 instance - permission error

It is just a permissions issue with your AWS .pem key.

Just change the permissions of the .pem key to 400 using the command below:

chmod 400 pemkeyname.pem

If you don't have permission to change the permissions of the file, you can use sudo as in the command below:

sudo chmod 400 pemkeyname.pem

I hope this should work fine.

How To Set Up GUI On Amazon EC2 Ubuntu server

So I followed the first answer, but my VNC viewer gave me a grey screen when I connected to it. I found this Ask Ubuntu link to solve that.

The only difference from the previous answer is that you need to install these extra packages:

apt-get install gnome-panel gnome-settings-daemon metacity nautilus gnome-terminal

And use this ~/.vnc/xstartup file:

#!/bin/sh

export XKL_XMODMAP_DISABLE=1
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS

[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
xsetroot -solid grey
vncconfig -iconic &

gnome-panel &
gnome-settings-daemon &
metacity &
nautilus &
gnome-terminal &

Everything else is the same.

Tested on EC2 Ubuntu 14.04 LTS.

Querying DynamoDB by date

Updated answer: There is no convenient way to do this using DynamoDB queries with predictable throughput. One (sub-optimal) option is to use a GSI with an artificial HashKey and CreatedAt. Then query by HashKey alone and set ScanIndexForward to order the results. If you can come up with a natural HashKey (say, the category of the item), then this method is a winner. On the other hand, if you keep the same HashKey for all items, it will affect the throughput, mostly once your data set grows beyond 10GB (one partition).

Original answer: You can do this now in DynamoDB by using a GSI. Make the "CreatedAt" field a GSI and issue queries like (GT some_date). Store the date as a number (msecs since epoch) for this kind of query.

Details are available here: Global Secondary Indexes - Amazon DynamoDB : http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GSI.html#GSI.Using

This is a very powerful feature. Be aware that the query is limited to (EQ | LE | LT | GE | GT | BEGINS_WITH | BETWEEN) Condition - Amazon DynamoDB : http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html
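
As a rough illustration of the GSI approach in Python (the table name, index name, and key names are hypothetical), a boto3 query with a date condition could look like this:

import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('Items')

resp = table.query(
    IndexName='CategoryCreatedAtIndex',
    # fixed hash key plus a range condition on CreatedAt (msecs since epoch)
    KeyConditionExpression=Key('Category').eq('books') & Key('CreatedAt').gt(1577836800000),
    ScanIndexForward=False,  # newest first
)
for item in resp['Items']:
    print(item)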

Error You must specify a region when running command aws ecs list-container-instances

"You must specify a region" is a not an ECS specific error, it can happen with any AWS API/CLI/SDK command.

For the CLI, either set the AWS_DEFAULT_REGION environment variable. e.g.

export AWS_DEFAULT_REGION=us-east-1

or add it into the command (you will need this every time you use a region-specific command)

AWS_DEFAULT_REGION=us-east-1 aws ecs list-container-instances --cluster default

or set it in the CLI configuration file: ~/.aws/config

[default]
region=us-east-1

or pass/override it with the CLI call:

aws ecs list-container-instances --cluster default --region us-east-1

Unable to load AWS credentials from the /AwsCredentials.properties file on the classpath

If you want to use environment variables with Apache/Tomcat, I found that the only way they could be found was by setting them in tomcat/bin/setenv.sh (where CATALINA_OPTS are set - it might be catalina.sh in your setup):

export AWS_ACCESS_KEY_ID=*********;

export AWS_SECRET_ACCESS_KEY=**************;

If you're using Ubuntu, try logging in as ubuntu and running printenv, then log in as root and run printenv - the environment variables won't necessarily be the same...

If you only want to use environmental variables you can use: com.amazonaws.auth.EnvironmentVariableCredentialsProvider

instead of:

com.amazonaws.auth.DefaultAWSCredentialsProviderChain

(which by default checks all 4 possible locations)

Anyway, after hours of trying to figure out why my environment variables weren't being found... this worked for me.

Retrieving subfolders names in S3 bucket from boto3

Here is a possible solution:

import boto3

def download_list_s3_folder(my_bucket, my_folder):
    s3 = boto3.client('s3')
    response = s3.list_objects_v2(
        Bucket=my_bucket,
        Prefix=my_folder,
        MaxKeys=1000)
    return [item["Key"] for item in response['Contents']]
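
If you actually want the subfolder names rather than every key, a sketch using Delimiter and the CommonPrefixes part of the response (paginated, since a single call returns at most 1000 entries) could look like this:

import boto3

def list_s3_subfolders(bucket, prefix=''):
    # Returns the immediate "subfolder" prefixes under the given prefix
    s3 = boto3.client('s3')
    paginator = s3.get_paginator('list_objects_v2')
    folders = []
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix, Delimiter='/'):
        for cp in page.get('CommonPrefixes', []):
            folders.append(cp['Prefix'])
    return folders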

scp (secure copy) to ec2 instance without password

Just tested:

Run the following command:

sudo shred -u /etc/ssh/*_key /etc/ssh/*_key.pub

Then:

  1. Create an AMI (image of the EC2 instance).
  2. Launch a new instance from the AMI created in step 1 and choose new keys.

How do you add swap to an EC2 instance?

A fix for this problem is to add swap (i.e. paging) space to the instance.

Paging works by creating an area on your hard drive and using it for extra memory. This memory is much slower than normal memory, but much more of it is available.

To add this extra space to your instance you type:

sudo /bin/dd if=/dev/zero of=/var/swap.1 bs=1M count=1024
sudo /sbin/mkswap /var/swap.1
sudo chmod 600 /var/swap.1
sudo /sbin/swapon /var/swap.1

If you need more than 1024 MB, change the count to something higher.

To enable it by default after reboot, add this line to /etc/fstab:

/var/swap.1   swap    swap    defaults        0   0

.htaccess not working apache

For Ubuntu,
First, run this command :-

sudo a2enmod rewrite

Then, edit the file /etc/apache2/sites-available/000-default.conf using nano or vim using this command :-

sudo nano /etc/apache2/sites-available/000-default.conf

Then in the 000-default.conf file, add this after the line DocumentRoot /var/www/html. If your root HTML directory is something else, write that instead :-

<Directory "/var/www/html">
  AllowOverride All
</Directory>

After doing everything, restart apache using the command sudo service apache2 restart

S3 Static Website Hosting Route All Paths to Index.html

The easiest solution to make an Angular 2+ application served from Amazon S3 work with direct URLs is to specify index.html as both the Index and Error documents in the S3 bucket configuration.

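If you'd rather script the same bucket setting instead of using the console, a minimal boto3 sketch (the bucket name is a placeholder) would be:

import boto3

s3 = boto3.client('s3')
s3.put_bucket_website(
    Bucket='my-angular-app-bucket',
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        # Serving index.html for errors lets deep links fall through to the SPA router
        'ErrorDocument': {'Key': 'index.html'},
    },
)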

WARNING: UNPROTECTED PRIVATE KEY FILE! when trying to SSH into Amazon EC2 Instance

On Windows, try using Git Bash and run your Linux commands there. Easy approach:

chmod 400 *****.pem

ssh -i "******.pem" [email protected]

URL for public Amazon S3 bucket

The URL structure you're referring to is called the REST endpoint, as opposed to the Web Site Endpoint.


Note: Since this answer was originally written, S3 has rolled out dualstack support on REST endpoints, using new hostnames, while leaving the existing hostnames in place. This is now integrated into the information provided, below.


If your bucket is really in the us-east-1 region of AWS -- which the S3 documentation formerly referred to as the "US Standard" region, but was subsequently officially renamed to the "U.S. East (N. Virginia) Region" -- then http://s3-us-east-1.amazonaws.com/bucket/ is not the correct form for that endpoint, even though it looks like it should be. The correct format for that region is either http://s3.amazonaws.com/bucket/ or http://s3-external-1.amazonaws.com/bucket/

The format you're using is applicable to all the other S3 regions, but not to US Standard, now called US East (N. Virginia) [us-east-1].

S3 now also has dual-stack endpoint hostnames for the REST endpoints, and unlike the original endpoint hostnames, the names of these have a consistent format across regions, for example s3.dualstack.us-east-1.amazonaws.com. These endpoints support both IPv4 and IPv6 connectivity and DNS resolution, but are otherwise functionally equivalent to the existing REST endpoints.

If your permissions and configuration are set up such that the web site endpoint works, then the REST endpoint should work, too.

However... the two endpoints do not offer the same functionality.

Roughly speaking, the REST endpoint is better-suited for machine access and the web site endpoint is better suited for human access, since the web site endpoint offers friendly error messages, index documents, and redirects, while the REST endpoint doesn't. On the other hand, the REST endpoint offers HTTPS and support for signed URLs, while the web site endpoint doesn't.

Choose the correct type of endpoint (REST or web site) for your application:

http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteEndpoints.html#WebsiteRestEndpointDiff


¹ s3-external-1.amazonaws.com has been referred to as the "Northern Virginia endpoint," in contrast to the "Global endpoint" s3.amazonaws.com. It was unofficially possible to get read-after-write consistency on new objects in this region if the "s3-external-1" hostname was used, because this would send you to a subset of possible physical endpoints that could provide that functionality. This behavior is now officially supported on this endpoint, so this is probably the better choice in many applications. Previously, s3-external-2 had been referred to as the "Pacific Northwest endpoint" for US-Standard, though it is now a CNAME in DNS for s3-external-1 so s3-external-2 appears to have no purpose except backwards-compatibility.

How to save S3 object to a file using boto3

There is a customization that went into Boto3 recently which helps with this (among other things). It is currently exposed on the low-level S3 client, and can be used like this:

s3_client = boto3.client('s3')
open('hello.txt', 'w').write('Hello, world!')

# Upload the file to S3
s3_client.upload_file('hello.txt', 'MyBucket', 'hello-remote.txt')

# Download the file from S3
s3_client.download_file('MyBucket', 'hello-remote.txt', 'hello2.txt')
print(open('hello2.txt').read())

These functions will automatically handle reading/writing files as well as doing multipart uploads in parallel for large files.

Note that s3_client.download_file won't create a directory. It can be created as pathlib.Path('/path/to/file.txt').parent.mkdir(parents=True, exist_ok=True).
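
Putting those two pieces together, a small sketch (the paths and bucket/key names are hypothetical) that creates the directory first and then downloads into it:

import pathlib
import boto3

target = pathlib.Path('/tmp/downloads/hello2.txt')
target.parent.mkdir(parents=True, exist_ok=True)  # ensure the directory exists

s3_client = boto3.client('s3')
s3_client.download_file('MyBucket', 'hello-remote.txt', str(target))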

Amazon AWS Filezilla transfer permission denied

If you are using CentOS, then use

sudo chown -R centos:centos /var/www/html

sudo chmod -R 755 /var/www/html

For Ubuntu

sudo chown -R ubuntu:ubuntu /var/www/html

sudo chmod -R 755 /var/www/html

For Amazon ami

sudo chown -R ec2-user:ec2-user /var/www/html

sudo chmod -R 755 /var/www/html

What is the difference between Amazon SNS and Amazon SQS?

AWS SNS is a publisher subscriber network, where subscribers can subscribe to topics and will receive messages whenever a publisher publishes to that topic.

AWS SQS is a queue service, which stores messages in a queue. SQS does not deliver messages on its own; an external service (Lambda, EC2, etc.) is needed to poll SQS and grab messages from it.

SNS and SQS can be used together for multiple reasons.

  1. There may be different kinds of subscribers: some need immediate delivery of messages, while others require the message to persist for later usage via polling. See this link.

  2. The "Fanout Pattern." This is for the asynchronous processing of messages. When a message is published to SNS, it can be distributed to multiple SQS queues in parallel. This can be great when loading thumbnails in an application in parallel, as images are being published. See this link and the sketch after this list.

  3. Persistent storage. This is useful when a service that is going to process a message is not reliable. In a case like this, if SNS pushes a notification to a service and that service is unavailable, the notification will be lost. Therefore we can use SQS as persistent storage and process the message afterwards.
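
A minimal boto3 sketch of the fan-out wiring from point 2 (the topic, queue, and message are hypothetical; note that the queue also needs an access policy allowing the topic to send to it, which is omitted here):

import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

topic_arn = sns.create_topic(Name='image-uploaded')['TopicArn']
queue_url = sqs.create_queue(QueueName='thumbnail-worker')['QueueUrl']
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=['QueueArn']
)['Attributes']['QueueArn']

# Every message published to the topic is fanned out to each subscribed queue
sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)
sns.publish(TopicArn=topic_arn, Message='images/cat.jpg was uploaded')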

How to undo a git pull?

Find the <SHA#> for the commit you want to go back to. You can find it on GitHub or by typing git log or git reflog show at the command line, and then do git reset --hard <SHA#>

Media Queries: How to target desktop, tablet, and mobile?

The best breakpoints recommended by Twitter Bootstrap

    /* Custom, iPhone Retina */
    @media only screen and (min-width : 320px) {

    }

    /* Extra Small Devices, Phones */ 
    @media only screen and (min-width : 480px) {

    }

    /* Small Devices, Tablets */
    @media only screen and (min-width : 768px) {

    }

    /* Medium Devices, Desktops */
    @media only screen and (min-width : 992px) {

    }

    /* Large Devices, Wide Screens */
    @media only screen and (min-width : 1200px) {

    }

Eclipse not recognizing JVM 1.8

Echoing the answer above, a full install of the JDK (8u121 at this writing) from here - http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html - did the trick. Updating via the Mac OS control panel did not update the profile variable; installing via the full installer did. Then Eclipse was happy.

How to make a submit out of a <a href...>...</a> link?

Don't forget the "BUTTON" element, which can handle some more HTML inside...

Number input type that takes only integers?

<input type="number" oninput="this.value = Math.round(this.value);"/>

How to extract text from a PDF?

One of the comments here used gs on Windows. I had some success with that on Linux/OSX too, with the following syntax:

gs \
 -q \
 -dNODISPLAY \
 -dSAFER \
 -dDELAYBIND \
 -dWRITESYSTEMDICT \
 -dSIMPLE \
 -f ps2ascii.ps \
 "${input}" \
 -dQUIET \
 -c quit

I used dSIMPLE instead of dCOMPLEX because the latter outputs 1 character per line.

What is this date format? 2011-08-12T20:17:46.384Z

You can use the following example.

    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;
    import java.util.Locale;

    String date = "2011-08-12T20:17:46.384Z";

    String inputPattern = "yyyy-MM-dd'T'HH:mm:ss.SSS'Z'";
    String outputPattern = "yyyy-MM-dd HH:mm:ss";

    DateTimeFormatter inputFormatter = DateTimeFormatter.ofPattern(inputPattern, Locale.ENGLISH);
    DateTimeFormatter outputFormatter = DateTimeFormatter.ofPattern(outputPattern, Locale.ENGLISH);

    LocalDateTime inputDate = LocalDateTime.parse(date, inputFormatter);
    String outputDate = outputFormatter.format(inputDate);

    System.out.println("inputDate: " + inputDate);
    System.out.println("outputDate: " + outputDate);

How do I close an Android alertdialog

Use setNegativeButton; no positive button is required! I promise you'll win x

How can one see content of stack with GDB?

You need to use gdb's memory-display commands. The basic one is x, for examine. There's an example on the linked-to page that uses

gdb> x/4xw $sp

to print "four words (w) of memory above the stack pointer (here, $sp) in hexadecimal (x)". The quotation is slightly paraphrased.

How to enable Auto Logon User Authentication for Google Chrome

Chrome has changed its menus since this question was asked. This solution was tested with Chrome 47.0.2526.73 to 72.0.3626.109.

If you are using Chrome right now, you can check your version with : chrome://version

  1. Goto: chrome://settings

  2. Scroll down to the bottom of the page and click on "Advanced" to show more settings.

     OLDER VERSIONS: Scroll down to the bottom of the page and click on "Show advanced settings..." to show more settings.

  3. In the "System" section, click on "Open proxy settings".

     OLDER VERSIONS: In the "Network" section, click on "Change proxy settings...".

  4. Click on the "Security" tab, then select "Local intranet" icon and click on "Sites" button.

  5. Click on "Advanced" button.

  6. Insert your intranet local address and click on the "Add" button.

  7. Close all windows.

That's it.

How to make a cross-module variable?

Define a module (call it "globalbaz") and have the variables defined inside it. All the modules using this "pseudoglobal" should import the "globalbaz" module and refer to it using "globalbaz.var_name".

This works regardless of the place of the change: you can change the variable before or after the import. The imported module will use the latest value. (I tested this in a toy example.)

For clarification, globalbaz.py looks just like this:

var_name = "my_useful_string"
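
And a consuming module (call it consumer.py, just for illustration) would use it like this:

# consumer.py
import globalbaz

print(globalbaz.var_name)           # "my_useful_string"
globalbaz.var_name = "new value"    # every module importing globalbaz sees this change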

Python: can't assign to literal

1 is a literal. name = value is an assignment. 1 = value is an assignment to a literal, which makes no sense. Why would you want 1 to mean something other than 1?

How to adjust text font size to fit textview

/* get your context */
Context c = getActivity().getApplicationContext();

LinearLayout l = new LinearLayout(c);
l.setOrientation(LinearLayout.VERTICAL);
LayoutParams params = new LayoutParams(LayoutParams.MATCH_PARENT, LayoutParams.MATCH_PARENT, 0);

l.setLayoutParams(params);
l.setBackgroundResource(R.drawable.border);

TextView tv=new TextView(c);
tv.setText(" your text here");

/* set typeface if needed */
Typeface tf = Typeface.createFromAsset(c.getAssets(),"fonts/VERDANA.TTF");  
tv.setTypeface(tf);

// LayoutParams lp = new LayoutParams();

tv.setTextColor(Color.parseColor("#282828"));

tv.setGravity(Gravity.CENTER | Gravity.BOTTOM);
//  tv.setLayoutParams(lp);

tv.setTextSize(20);
l.addView(tv);

return l;

Clicking the back button twice to exit an activity

After having had to implement the same thing many times, I decided it's about time someone built a simple-to-use library. And that is the DoubleBackPress Android library. The README explains all the APIs provided along with examples (like ToastDisplay + Exit Activity), but here is a short rundown of the steps.


To get going, start by adding the dependency to your application :

dependencies {
    implementation 'com.github.kaushikthedeveloper:double-back-press:0.0.1'
} 

Next, create a DoubleBackPress object in your Activity which provides the required behaviour.

DoubleBackPress doubleBackPress = new DoubleBackPress();
doubleBackPress.setDoublePressDuration(3000);           // msec

Then create a Toast that needs to be shown upon the first back press. Here, you can create your own Toast, or go with the standard Toast provided in the library. We do the latter here.

FirstBackPressAction firstBackPressAction = new ToastDisplay().standard(this);
doubleBackPress.setFirstBackPressAction(firstBackPressAction);   // set the action

Now, define what should happen when your Second Back Press happens. Here, we are closing the Activity.

DoubleBackPressAction doubleBackPressAction = new DoubleBackPressAction() {
    @Override
    public void actionCall() {
        finish();
        System.exit(0);
    }
};

Finally, override your back press behaviour with the DoubleBackPress behaviour.

@Override
public void onBackPressed() {
    doubleBackPress.onBackPressed();
}

Example GIF of similar behavioural requirements

How to remove a variable from a PHP session array

To remove a specific variable from the session use:

session_unregister('variableName');

(see documentation) or

unset($_SESSION['variableName']);

NOTE: session_unregister() has been DEPRECATED as of PHP 5.3.0 and REMOVED as of PHP 5.4.0.

How do I debug "Error: spawn ENOENT" on node.js?

For anyone who might stumble upon this, if all the other answers do not help and you are on Windows, know that there is currently a big issue with spawn on Windows and the PATHEXT environment variable that can cause certain calls to spawn to not work depending on how the target command is installed.

Makefile to compile multiple C programs?

all: program1 program2

program1:
    gcc -Wall -ansi -pedantic -o prog1 program1.c

program2:
    gcc -Wall -ansi -pedantic -o prog2 program2.c

I prefer -ansi and -pedantic; they give better control over your program. They won't let you compile while you still have warnings!

In Angular, What is 'pathmatch: full' and what effect does it have?

While technically correct, the other answers would benefit from an explanation of Angular's URL-to-route matching. I don't think you can fully (pardon the pun) understand what pathMatch: full does if you don't know how the router works in the first place.


Let's first define a few basic things. We'll use this URL as an example: /users/james/articles?from=134#section.

  1. It may be obvious but let's first point out that query parameters (?from=134) and fragments (#section) do not play any role in path matching. Only the base url (/users/james/articles) matters.

  2. Angular splits URLs into segments. The segments of /users/james/articles are, of course, users, james and articles.

  3. The router configuration is a tree structure with a single root node. Each Route object is a node, which may have children nodes, which may in turn have other children or be leaf nodes.

The goal of the router is to find a router configuration branch, starting at the root node, which would match exactly all (!!!) segments of the URL. This is crucial! If Angular does not find a route configuration branch which could match the whole URL - no more and no less - it will not render anything.

E.g. if your target URL is /a/b/c but the router is only able to match either /a/b or /a/b/c/d, then there is no match and the application will not render anything.

Finally, routes with redirectTo behave slightly differently than regular routes, and it seems to me that they would be the only place where anyone would really ever want to use pathMatch: full. But we will get to this later.

Default (prefix) path matching

The reasoning behind the name prefix is that such a route configuration will check if the configured path is a prefix of the remaining URL segments. However, the router is only able to match full segments, which makes this naming slightly confusing.

Anyway, let's say this is our root-level router configuration:

const routes: Routes = [
  {
    path: 'products',
    children: [
      {
        path: ':productID',
        component: ProductComponent,
      },
    ],
  },
  {
    path: ':other',
    children: [
      {
        path: 'tricks',
        component: TricksComponent,
      },
    ],
  },
  {
    path: 'user',
    component: UsersonComponent,
  },
  {
    path: 'users',
    children: [
      {
        path: 'permissions',
        component: UsersPermissionsComponent,
      },
      {
        path: ':userID',
        children: [
          {
            path: 'comments',
            component: UserCommentsComponent,
          },
          {
            path: 'articles',
            component: UserArticlesComponent,
          },
        ],
      },
    ],
  },
];

Note that every single Route object here uses the default matching strategy, which is prefix. This strategy means that the router iterates over the whole configuration tree and tries to match it against the target URL segment by segment until the URL is fully matched. Here's how it would be done for this example:

  1. Iterate over the root array looking for an exact match for the first URL segment - users.
  2. 'products' !== 'users', so skip that branch. Note that we are using an equality check rather than a .startsWith() or .includes() - only full segment matches count!
  3. :other matches any value, so it's a match. However, the target URL is not yet fully matched (we still need to match james and articles), thus the router looks for children.
  • The only child of :other is tricks, which is !== 'james', hence not a match.
  4. Angular then retraces back to the root array and continues from there.
  5. 'user' !== 'users', skip branch.
  6. 'users' === 'users' - the segment matches. However, this is not a full match yet, thus we need to look for children (same as in step 3).
  • 'permissions' !== 'james', skip it.
  • :userID matches anything, thus we have a match for the james segment. However this is still not a full match, thus we need to look for a child which would match articles.
    • We can see that :userID has a child route articles, which gives us a full match! Thus the application renders UserArticlesComponent.

Full URL (full) matching

Example 1

Imagine now that the users route configuration object looked like this:

{
  path: 'users',
  component: UsersComponent,
  pathMatch: 'full',
  children: [
    {
      path: 'permissions',
      component: UsersPermissionsComponent,
    },
    {
      path: ':userID',
      component: UserComponent,
      children: [
        {
          path: 'comments',
          component: UserCommentsComponent,
        },
        {
          path: 'articles',
          component: UserArticlesComponent,
        },
      ],
    },
  ],
}

Note the usage of pathMatch: full. If this were the case, steps 1-5 would be the same, however step 6 would be different:

  1. 'users' !== 'users/james/articles' - the segment does not match because the path configuration users with pathMatch: full does not match the full URL, which is users/james/articles.
  2. Since there is no match, we are skipping this branch.
  3. At this point we reached the end of the router configuration without having found a match. The application renders nothing.

Example 2

What if we had this instead:

{
  path: 'users/:userID',
  component: UsersComponent,
  pathMatch: 'full',
  children: [
    {
      path: 'comments',
      component: UserCommentsComponent,
    },
    {
      path: 'articles',
      component: UserArticlesComponent,
    },
  ],
}

users/:userID with pathMatch: full matches only users/james thus it's a no-match once again, and the application renders nothing.

Example 3

Let's consider this:

{
  path: 'users',
  children: [
    {
      path: 'permissions',
      component: UsersPermissionsComponent,
    },
    {
      path: ':userID',
      component: UserComponent,
      pathMatch: 'full',
      children: [
        {
          path: 'comments',
          component: UserCommentsComponent,
        },
        {
          path: 'articles',
          component: UserArticlesComponent,
        },
      ],
    },
  ],
}

In this case:

  1. 'users' === 'users' - the segment matches, but james/articles still remains unmatched. Let's look for children.
  • 'permissions' !== 'james' - skip.
  • ':userID' can only match a single segment, which would be james. However, it's a pathMatch: full route, and it must match james/articles (the whole remaining URL). It's not able to do that and thus it's not a match (so we skip this branch)!
  2. Again, we failed to find any match for the URL and the application renders nothing.

As you may have noticed, a pathMatch: full configuration is basically saying this:

Ignore my children and only match me. If I am not able to match all of the remaining URL segments myself, then move on.

Redirects

Any Route which has defined a redirectTo will be matched against the target URL according to the same principles. The only difference here is that the redirect is applied as soon as a segment matches. This means that if a redirecting route is using the default prefix strategy, a partial match is enough to cause a redirect. Here's a good example:

const routes: Routes = [
  {
    path: 'not-found',
    component: NotFoundComponent,
  },
  {
    path: 'users',
    redirectTo: 'not-found',
  },
  {
    path: 'users/:userID',
    children: [
      {
        path: 'comments',
        component: UserCommentsComponent,
      },
      {
        path: 'articles',
        component: UserArticlesComponent,
      },
    ],
  },
];

For our initial URL (/users/james/articles), here's what would happen:

  1. 'not-found' !== 'users' - skip it.
  2. 'users' === 'users' - we have a match.
  3. This match has a redirectTo: 'not-found', which is applied immediately.
  4. The target URL changes to not-found.
  5. The router begins matching again and finds a match for not-found right away. The application renders NotFoundComponent.

Now consider what would happen if the users route also had pathMatch: full:

const routes: Routes = [
  {
    path: 'not-found',
    component: NotFoundComponent,
  },
  {
    path: 'users',
    pathMatch: 'full',
    redirectTo: 'not-found',
  },
  {
    path: 'users/:userID',
    children: [
      {
        path: 'comments',
        component: UserCommentsComponent,
      },
      {
        path: 'articles',
        component: UserArticlesComponent,
      },
    ],
  },
];

  1. 'not-found' !== 'users' - skip it.
  2. users would match the first segment of the URL, but the route configuration requires a full match, thus skip it.
  3. 'users/:userID' matches users/james. articles is still not matched but this route has children.
  • We find a match for articles in the children. The whole URL is now matched and the application renders UserArticlesComponent.

Empty path (path: '')

The empty path is a bit of a special case because it can match any segment without "consuming" it (so its children would have to match that segment again). Consider this example:

const routes: Routes = [
  {
    path: '',
    children: [
      {
        path: 'users',
        component: BadUsersComponent,
      }
    ]
  },
  {
    path: 'users',
    component: GoodUsersComponent,
  },
];

Let's say we are trying to access /users:

  • path: '' will always match, thus the route matches. However, the whole URL has not been matched - we still need to match users!
  • We can see that there is a child users, which matches the remaining (and only!) segment and we have a full match. The application renders BadUsersComponent.

Now back to the original question

The OP used this router configuration:

const routes: Routes = [
  {
    path: 'welcome',
    component: WelcomeComponent,
  },
  {
    path: '',
    redirectTo: 'welcome',
    pathMatch: 'full',
  },
  {
    path: '**',
    redirectTo: 'welcome',
    pathMatch: 'full',
  },
];

If we are navigating to the root URL (/), here's how the router would resolve that:

  1. welcome does not match an empty segment, so skip it.
  2. path: '' matches the empty segment. It has a pathMatch: 'full', which is also satisfied as we have matched the whole URL (it had a single empty segment).
  3. A redirect to welcome happens and the application renders WelcomeComponent.

What if there was no pathMatch: 'full'?

Actually, one would expect the whole thing to behave exactly the same. However, Angular explicitly prevents such a configuration ({ path: '', redirectTo: 'welcome' }) because if you put this Route above welcome, it would theoretically create an endless loop of redirects. So Angular just throws an error, which is why the application would not work at all! (https://angular.io/api/router/Route#pathMatch)

Actually, this does not make too much sense to me because Angular also has implemented a protection against such endless redirects - it only runs a single redirect per routing level! This would stop all further redirects (as you'll see in the example below).

What about path: '**'?

path: '**' will match absolutely anything (af/frewf/321532152/fsa is a match) with or without a pathMatch: 'full'.

Also, since it matches everything, the root path is also included, which makes { path: '', redirectTo: 'welcome' } completely redundant in this setup.

Funnily enough, it is perfectly fine to have this configuration:

const routes: Routes = [
  {
    path: '**',
    redirectTo: 'welcome'
  },
  {
    path: 'welcome',
    component: WelcomeComponent,
  },
];

If we navigate to /welcome, path: '**' will be a match and a redirect to welcome will happen. Theoretically this should kick off an endless loop of redirects but Angular stops that immediately (because of the protection I mentioned earlier) and the whole thing works just fine.

What are Aggregates and PODs and how/why are they special?

What changes in C++20

Following the rest of the clear theme of this question, the meaning and use of aggregates continues to change with every standard. There are several key changes on the horizon.

Types with user-declared constructors P1008

In C++17, this type is still an aggregate:

struct X {
    X() = delete;
};

And hence, X{} still compiles because that is aggregate initialization - not a constructor invocation. See also: When is a private constructor not a private constructor?

In C++20, the restriction will change from requiring:

no user-provided, explicit, or inherited constructors

to

no user-declared or inherited constructors

This has been adopted into the C++20 working draft. Neither the X here nor the C in the linked question will be aggregates in C++20.

This also makes for a yo-yo effect with the following example:

class A { protected: A() { }; };
struct B : A { B() = default; };
auto x = B{};

In C++11/14, B was not an aggregate due to the base class, so B{} performs value-initialization which calls B::B() which calls A::A(), at a point where it is accessible. This was well-formed.

In C++17, B became an aggregate because base classes were allowed, which made B{} aggregate-initialization. This requires copy-list-initializing an A from {}, but from outside the context of B, where it is not accessible. In C++17, this is ill-formed (auto x = B(); would be fine though).

In C++20 now, because of the above rule change, B once again ceases to be an aggregate (not because of the base class, but because of the user-declared default constructor - even though it's defaulted). So we're back to going through B's constructor, and this snippet becomes well-formed.

Initializing aggregates from a parenthesized list of values P960

A common issue that comes up is wanting to use emplace()-style constructors with aggregates:

struct X { int a, b; };
std::vector<X> xs;
xs.emplace_back(1, 2); // error

This does not work, because emplace will try to effectively perform the initialization X(1, 2), which is not valid. The typical solution is to add a constructor to X, but with this proposal (currently working its way through Core), aggregates will effectively have synthesized constructors which do the right thing - and behave like regular constructors. The above code will compile as-is in C++20.

Class Template Argument Deduction (CTAD) for Aggregates P1021 (specifically P1816)

In C++17, this does not compile:

template <typename T>
struct Point {
    T x, y;
};

Point p{1, 2}; // error

Users would have to write their own deduction guide for all aggregate templates:

template <typename T> Point(T, T) -> Point<T>;

But as this is in some sense "the obvious thing" to do, and is basically just boilerplate, the language will do this for you. This example will compile in C++20 (without the need for the user-provided deduction guide).

How to Call Controller Actions using JQuery in ASP.NET MVC

In response to the above post I think it needs this line instead of your line:-

var strMethodUrl = '@Url.Action("SubMenu_Click", "Logging")?param1='+value1+' &param2='+value2

Otherwise you send the literal strings value1 and value2 to the controller.

However, for me, it only calls the controller once. It seems to hit 'receiveResponse' each time, but a breakpoint on the controller method shows it is only hit the first time until a page refresh.


Here is a working solution. For the cshtml page:-

   <button type="button" onclick="ButtonClick();"> Call &raquo;</button>

<script>
    function ButtonClick()
    {
        callControllerMethod2("1", "2");
    }
    function callControllerMethod2(value1, value2)
    {
        var response = null;
        $.ajax({
            async: true,
            url: "Logging/SubMenu_Click?param1=" + value1 + " &param2=" + value2,
            cache: false,
            dataType: "json",
            success: function (data) { receiveResponse(data); }
        });
    }
    function receiveResponse(response)
    {
        if (response != null)
        {
            for (var i = 0; i < response.length; i++)
            {
                alert(response[i].Data);
            }
        }
    }
</script>

And for the controller:-

public class A
{
    public string Id { get; set; }
    public string Data { get; set; }

}
public JsonResult SubMenu_Click(string param1, string param2)
{
    A[] arr = new A[] {new A(){ Id = "1", Data = DateTime.Now.Millisecond.ToString() } };
    return Json(arr , JsonRequestBehavior.AllowGet);
}

You can see the time changing each time it is called, so there is no caching of the values...

How to convert int to Integer

I had a similar problem. For this you can use a HashMap which takes "String" and "Object", as shown in the code below:

/** stores the image database icons */
public static int[] imageIconDatabase = { R.drawable.ball,
        R.drawable.catmouse, R.drawable.cube, R.drawable.fresh,
        R.drawable.guitar, R.drawable.orange, R.drawable.teapot,
        R.drawable.india, R.drawable.thailand, R.drawable.netherlands,
        R.drawable.srilanka, R.drawable.pakistan,

};
private void initializeImageList() {
    // TODO Auto-generated method stub
    for (int i = 0; i < imageIconDatabase.length; i++) {
        map = new HashMap<String, Object>();

        map.put("Name", imageNameDatabase[i]);
        map.put("Icon", imageIconDatabase[i]);
    }

}

IN vs ANY operator in PostgreSQL

(Neither IN nor ANY is an "operator"; rather, each is a "construct" or "syntax element".)

Logically, quoting the manual:

IN is equivalent to = ANY.

But there are two syntax variants of IN and two variants of ANY. Details:

IN taking a set is equivalent to = ANY taking a set.

But the second variant of each is not equivalent to the other. The second variant of the ANY construct takes an array (must be an actual array type), while the second variant of IN takes a comma-separated list of values. This leads to different restrictions in passing values and can also lead to different query plans in special cases (see the sketch below).
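
As a sketch of that difference in passing values (assuming Python with psycopg2 and a table tbl(id int)): psycopg2 adapts a Python list to a PostgreSQL array, which slots straight into = ANY(%s), whereas IN (...) would need one placeholder per value:

import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical connection string
ids = [1, 2]
with conn.cursor() as cur:
    # The list is passed as a single array parameter
    cur.execute("SELECT * FROM tbl WHERE id = ANY(%s)", (ids,))
    rows = cur.fetchall()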

ANY is more versatile

The ANY construct is far more versatile, as it can be combined with various operators, not just =. Example:

SELECT 'foo' LIKE ANY('{FOO,bar,%oo%}');

For a big number of values, providing a set scales better for each.

Inversion / opposite / exclusion

"Find rows where id is in the given array":

SELECT * FROM tbl WHERE id = ANY (ARRAY[1, 2]);

Inversion: "Find rows where id is not in the array":

SELECT * FROM tbl WHERE id <> ALL (ARRAY[1, 2]);
SELECT * FROM tbl WHERE id <> ALL ('{1, 2}');  -- equivalent array literal
SELECT * FROM tbl WHERE NOT (id = ANY ('{1, 2}'));

All three are equivalent. The first uses an array constructor, the other two an array literal. The data type can usually be derived from context unambiguously; else, an explicit cast may be required, like '{1,2}'::int[].

Rows with id IS NULL do not pass either of these expressions. To include NULL values additionally:

SELECT * FROM tbl WHERE (id = ANY ('{1, 2}')) IS NOT TRUE;

jQuery: click function exclude children.

Here is an example. The green square is the parent and the yellow square is the child element.

Hope that this helps.

var childElementClicked;

$("#parentElement").click(function(){

  $("#childElement").click(function(){
     childElementClicked = true;
  });

  if( childElementClicked != true ) {
      // It is clicked on parent but not on child.
      // Now do some action that you want.
      alert('Clicked on parent');
  } else {
      alert('Clicked on child');
  }

  childElementClicked = false;

});

#parentElement{
  width:200px;
  height:200px;
  background-color:green;
  position:relative;
}

#childElement{
  margin-top:50px;
  margin-left:50px;
  width:100px;
  height:100px;
  background-color:yellow;
  position:absolute;
}

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="parentElement">
  <div id="childElement">
  </div>
</div>

Difference between Math.Floor() and Math.Truncate()

Some examples:

Round(1.5) = 2
Round(2.5) = 2
Round(1.5, MidpointRounding.AwayFromZero) = 2
Round(2.5, MidpointRounding.AwayFromZero) = 3
Round(1.55, 1) = 1.6
Round(1.65, 1) = 1.6
Round(1.55, 1, MidpointRounding.AwayFromZero) = 1.6
Round(1.65, 1, MidpointRounding.AwayFromZero) = 1.7

Truncate(2.10) = 2
Truncate(2.00) = 2
Truncate(1.90) = 1
Truncate(1.80) = 1

AltGr key not working, instead I have to use Ctrl+AltGr

I found a solution for my problem while writing my question!

Going into my remote session I tried two key combinations, and it solved the problem on my desktop: Alt+Enter and Ctrl+Enter (I don't know which one solved the problem, though).

I tried to reproduce the problem, but I couldn't... still, I'm almost sure it's one of the key combinations described in the question above (since I experienced this problem several times).

So it seems the problem comes from the use of RDP (Windows 7 and 8).

Update 2017: The problem occurs on Windows 10 as well.

How can I get the size of an std::vector as an int?

In the first two cases, you simply forgot to actually call the member function (it's a function, not a value) std::vector<int>::size, like this:

#include <vector>

int main () {
    std::vector<int> v;
    auto size = v.size();
}

Your third call

int size = v.size();

triggers a warning, as not every return value of that function (usually a 64 bit unsigned int) can be represented as a 32 bit signed int.

int size = static_cast<int>(v.size());

would always compile cleanly and also explicitly states that your conversion from std::vector::size_type to int was intended.

Note that if the size of the vector is greater than the biggest number an int can represent, size will contain an implementation defined (de facto garbage) value.

Failure [INSTALL_FAILED_INVALID_APK]

Turning off Instant Run solved my issue. I don't know of an explanation so far; after migrating to Android Studio 3.0, problems like this started. Hope this helps someone in the future.

Should URL be case sensitive?

I am not a fan of bumping old articles but because this was one of the first responses for this particular issue I felt a need to clarify something.

As @Bhavin Shah's answer states, the domain part of the URL is case insensitive, so

http://google.com 

and

http://GOOGLE.COM 

and

http://GoOgLe.CoM 

are all the same but everything after the domain name part is considered case sensitive.

so...

http://GOOGLE.COM/ABOUT

and

http://GOOGLE.COM/about

are different.

Note: I am talking "technically" and not "literally". In a lot of cases (most, actually) servers are set up to handle these items the same, but it is possible to set them up so they are NOT handled the same.

Different servers handle this differently, and in some cases they have to be case sensitive. In many cases query string values are encoded (such as session IDs or Base64-encoded data that's passed as a query string value). These items are case sensitive by their nature, so the server has to be case sensitive in handling them.

So to answer the question, "should" servers be case sensitive in grabbing this data, the answer is "yes, most definitely."

Of course not everything needs to be case sensitive but the server should be aware of what that is and how to handle those cases.


@Hart Simha's comment basically says the same thing. I missed it before I posted so I want to give credit where credit is due.

How can I convert a PFX certificate file for use with Apache on a linux server?

Additionally to

openssl pkcs12 -in domain.pfx -clcerts -nokeys -out domain.cer
openssl pkcs12 -in domain.pfx -nocerts -nodes  -out domain.key

I also generated Certificate Authority (CA) certificate:

openssl pkcs12 -in domain.pfx -out domain-ca.crt -nodes -nokeys -cacerts

And included it in Apache config file:

<VirtualHost 192.168.0.1:443>
 ...
 SSLEngine on
 SSLCertificateFile /path/to/domain.cer
 SSLCertificateKeyFile /path/to/domain.key
 SSLCACertificateFile /path/to/domain-ca.crt
 ...
</VirtualHost>

Any way to generate ant build.xml file automatically from Eclipse?

If all you need is the classpath entries, I do something like the following to use the eclipse build path.

<xmlproperty file=".classpath" collapseAttributes="true" delimiter=";" />

Then set that value in the path

<path id="eclipse.classpath">
    <pathelement path="${classpath.classpathentry.path}"/>
</path>


<target name="compile" depends="init">

    <javac srcdir="${src}" destdir="${build}" updatedProperty="compiled">
        <classpath refid="eclipse.classpath"/>
    </javac>
</target>

Kill detached screen session

screen -wipe

Should clean all dead screen sessions.

Max length UITextField

Update for Swift 4

 func textField(_ textField: UITextField, shouldChangeCharactersIn range: NSRange, replacementString string: String) -> Bool {
     guard let text = textField.text else { return true }
     let newLength = text.count + string.count - range.length
     return newLength <= 10
}

How to enable core dump in my Linux C++ program

You can do it this way inside a program:

#include <sys/resource.h>

// core dumps may be disallowed by parent of this process; change that
struct rlimit core_limits;
core_limits.rlim_cur = core_limits.rlim_max = RLIM_INFINITY;
setrlimit(RLIMIT_CORE, &core_limits);

How to make the first option of <select> selected with jQuery

If you are going to use the first option as a default like

<select>
    <option value="">Please select an option below</option>
    ...

then you can just use:

$('select').val('');

It is nice and simple.

React Router Pass Param to Component

Here's typescript version. works on "react-router-dom": "^4.3.1"

export const AppRouter: React.StatelessComponent = () => {
    return (
        <BrowserRouter>
            <Switch>
                <Route exact path="/problem/:problemId" render={props => <ProblemPage {...props.match.params} />} />
                <Route path="/" exact component={App} />
            </Switch>
        </BrowserRouter>
    );
};

and component

export class ProblemPage extends React.Component<ProblemRouteTokens> {

    public render(): JSX.Element {
        return <div>{this.props.problemId}</div>;
    }
}

where ProblemRouteTokens

export interface ProblemRouteTokens { problemId: string; }

Horizontal list items

You could also use inline blocks to avoid floating elements

<ul>
    <li>
        <a href="#">some item</a>
    </li>
    <li>
        <a href="#">another item</a>
   </li>
</ul>

and then style as:

li{
    /* with fix for IE */
    display:inline;
    display:inline-block;
    zoom:1;
    /*
    additional styles to make it look nice
    */
 }

that way you wont need to float anything, eliminating the need for clearfixes

How do I get the size of a java.sql.ResultSet?

Well, if you have a ResultSet of type ResultSet.TYPE_FORWARD_ONLY you want to keep it that way (and not switch to ResultSet.TYPE_SCROLL_INSENSITIVE or ResultSet.TYPE_SCROLL_SENSITIVE in order to be able to use .last()).

I suggest a very nice and efficient hack, where you add a first bogus/phony row at the top containing the number of rows.

Example

Let's say your query is the following

select MYBOOL,MYINT,MYCHAR,MYSMALLINT,MYVARCHAR
from MYTABLE
where ...blahblah...

and your output looks like

true    65537 "Hey" -32768 "The quick brown fox"
false  123456 "Sup"    300 "The lazy dog"
false -123123 "Yo"       0 "Go ahead and jump"
false       3 "EVH"    456 "Might as well jump"
...
[1000 total rows]

Simply refactor your code to something like this:

Statement s=myConnection.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                         ResultSet.CONCUR_READ_ONLY);
String from_where="FROM myTable WHERE ...blahblah... ";
//h4x
ResultSet rs=s.executeQuery("select count(*)as RECORDCOUNT,"
                           +       "cast(null as boolean)as MYBOOL,"
                           +       "cast(null as int)as MYINT,"
                           +       "cast(null as char(1))as MYCHAR,"
                           +       "cast(null as smallint)as MYSMALLINT,"
                           +       "cast(null as varchar(1))as MYVARCHAR "
                           +from_where
                           +"UNION ALL "//the "ALL" part prevents internal re-sorting to prevent duplicates (and we do not want that)
                           +"select cast(null as int)as RECORDCOUNT,"
                           +       "MYBOOL,MYINT,MYCHAR,MYSMALLINT,MYVARCHAR "
                           +from_where);

Your query output will now be something like

1000 null     null null    null null
null true    65537 "Hey" -32768 "The quick brown fox"
null false  123456 "Sup"    300 "The lazy dog"
null false -123123 "Yo"       0 "Go ahead and jump"
null false       3 "EVH"    456 "Might as well jump"
...
[1001 total rows]

So you just have to

if(rs.next())
    System.out.println("Recordcount: "+rs.getInt("RECORDCOUNT"));//hack: first record contains the record count
while(rs.next())
    //do your stuff

Method to Add new or update existing item in Dictionary

There's no problem. I would even remove CreateNewOrUpdateExisting from the source and use map[key] = value directly in your code, because this is much more readable and developers would usually know what map[key] = value means.

Convert text into number in MySQL query

To get the number, try SUBSTRING_INDEX(field, '-', 1) and then convert.

Display date in dd/mm/yyyy format in vb.net

First, uppercase MM are months and lowercase mm are minutes.

You have to pass CultureInfo.InvariantCulture to ToString to ensure that / as date separator is used since it would normally be replaced with the current culture's date separator:

MsgBox(dt.ToString("dd/MM/yyyy", CultureInfo.InvariantCulture))

Another option is to escape that custom format specifier by embedding the / within ':

dt.ToString("dd'/'MM'/'yyyy")

MSDN: The "/" Custom Format Specifier:

The "/" custom format specifier represents the date separator, which is used to differentiate years, months, and days. The appropriate localized date separator is retrieved from the DateTimeFormatInfo.DateSeparator property of the current or specified culture.

How to use View.OnTouchListener instead of onClick

OnClick is triggered when the user releases the button. But if you still want to use the TouchListener you need to add it in code. It's just:

myView.setOnTouchListener(new View.OnTouchListener()
{
    // Implementation;
});

How to make canvas responsive

<div class="outer">
 <canvas id="canvas"></canvas>
</div>

.outer {
 position: relative;
 width: 100%;
 padding-bottom: 100%;
}

#canvas {
 position: absolute;
 width: 100%;
 height: 100%;
}

How to upload files on server folder using jsp

You cannot upload like this.

http://grand-shopping.com/<"some folder">

You need a physical path exactly like in your local

C:/Users/puneet verma/Downloads/

What you can do is create some local path where your server is working, so you can store and retrieve the file there. If you bought a domain from some website, there will be a path to upload the files to. Create these paths as static constants and use the one that matches the server you are working on (local/website).

jquery change class name

So you want to change it WHEN it's clicked...let me go through the whole process. Let's assume that your "External DOM Object" is an input, like a select:

Let's start with this HTML:

<body>
  <div>
    <select id="test">
      <option>Bob</option>
      <option>Sam</option>
      <option>Sue</option>
      <option>Jen</option>
    </select>
  </div>

  <table id="theTable">
    <tr><td id="cellToChange">Bob</td><td>Sam</td></tr>
    <tr><td>Sue</td><td>Jen</td></tr>
  </table>
</body>

Some very basic CSS:

#theTable td {
    border:1px solid #555;
}
.activeCell {
    background-color:#F00;
}

And set up a jQuery event:

function highlightCell(useVal){
  $("#theTable td").removeClass("activeCell")
      .filter(":contains('"+useVal+"')").addClass("activeCell");
}

$(document).ready(function(){
    $("#test").change(function(e){highlightCell($(this).val())});
});

Now, whenever you pick something from the select, it will automatically find a cell with the matching text, allowing you to subvert the whole id-based process. Of course, if you wanted to do it that way, you could easily modify the script to use IDs rather than values by saying

.filter("#"+useVal)

and make sure to add the ids appropriately. Hope this helps!

Bootstrap - Uncaught TypeError: Cannot read property 'fn' of undefined

//Call .noConflict() to restore JQuery reference.

jQuery.noConflict(); OR  $.noConflict();

//Do something with jQuery.

jQuery( "div.class" ).hide(); OR $( "div.class" ).show();

How to check if std::map contains a key without doing insert?

Your desideratum, map.contains(key), is scheduled for the draft standard C++2a. It is implemented by gcc 9.2 and is also in current clang.

ssh: check if a tunnel is alive

Netcat is your friend:

nc -z localhost 6000 || echo "no tunnel open"
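
A rough Python equivalent of the same check, in case you want it inside a script (the host and port are assumptions matching the example above):

import socket

def tunnel_is_open(host='localhost', port=6000, timeout=2):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if not tunnel_is_open():
    print("no tunnel open")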

BAT file to map to network drive without running as admin

This .vbs code creates a .bat file with the current mapped network drives. Then just put the created file on the machine where you want to re-create the mappings and double-click it. It will try to create all mappings using the same drive letters (errors can occur if a letter is already in use). This method can also be used as a backup of the current mappings. Save the code below as a .vbs file (e.g. Mappings.vbs) and double-click it.

' ********** My Code **********
Set wshShell = CreateObject( "WScript.Shell" )

' ********** Get ComputerName
strComputer = wshShell.ExpandEnvironmentStrings( "%COMPUTERNAME%" )

' ********** Get Domain 
sUserDomain = createobject("wscript.network").UserDomain

Set Connect = GetObject("winmgmts://"&strComputer)
Set WshNetwork = WScript.CreateObject("WScript.Network")
Set oDrives = WshNetwork.EnumNetworkDrives
Set oPrinters = WshNetwork.EnumPrinterConnections

' ********** Current Path
sCurrentPath = CreateObject("Scripting.FileSystemObject").GetParentFolderName(WScript.ScriptFullName)

' ********** Blank the report message
strMsg = ""

' ********** Set objects 
Set objWMIService = GetObject("winmgmts:" & "{impersonationLevel=impersonate}!\\" & strComputer & "\root\cimv2")
Set objWbem = GetObject("winmgmts:")
Set objRegistry = GetObject("winmgmts://" & strComputer & "/root/default:StdRegProv")

' ********** Get UserName
sUser = CreateObject("WScript.Network").UserName

' ********** Print user and computer
'strMsg = strMsg & "    User: " & sUser & VbCrLf
'strMsg = strMsg & "Computer: " & strComputer & VbCrLf & VbCrLf

strMsg = strMsg & "###  COPIED FROM " & strComputer & " ###" & VbCrLf& VbCrLf
strMsg = strMsg & "@echo off" & vbCrLf

For i = 0 to oDrives.Count - 1 Step 2
    strMsg = strMsg & "net use " & oDrives.Item(i) & " " & oDrives.Item(i+1) & " /user:" & sUserDomain & "\" & sUser & " /persistent:yes" & VbCrLf
Next
strMsg = strMsg & ":exit" & VbCrLf
strMsg = strMsg & "@pause" & VbCrLf

' ********** write the file to disk.
strDirectory = sCurrentPath 
Set objFSO = CreateObject("Scripting.FileSystemObject")
If objFSO.FolderExists(strDirectory) Then
    ' Proceed
Else
    Set objFolder = objFSO.CreateFolder(strDirectory)
End if

' ********** Calculate date serial for filename **********
intMonth = month(now)
if intMonth < 10 then
    strThisMonth = "0" & intMonth
else
    strThisMonth = intMonth
end if
intDay = Day(now)
if intDay < 10 then
    strThisDay = "0" & intDay
else
    strThisDay = intDay
end if
strFilenameDateSerial = year(now) & strThisMonth & strThisDay
    sFileName = strDirectory & "\" & strComputer & "_" & sUser & "_MappedDrives" & "_" & strFilenameDateSerial & ".bat"
    Set objFile = objFSO.CreateTextFile(sFileName,True) 
objFile.Write strMsg & vbCrLf

' ********** Ask to view file
strFinish = "End: A .bat was generated. " & VbCrLf & "Copy the generated file  (" & sFileName & ")  into the machine where you want to recreate the mappings and double-click it." & VbCrLf & VbCrLf 
MsgBox(strFinish)

What is "string[] args" in Main class for?

In addition to the other answers: note that these args give you the paths of any files dragged and dropped onto the .exe. That is, if you drag and drop a file onto your .exe, the application is launched and args[0] contains the path of the dropped file.

static void Main(string[] args)
{
    Console.WriteLine(args[0]);
}

this will print the path of the file dropped on the .exe file. e.g

C:\Users\ABCXYZ\source\repos\ConsoleTest\ConsoleTest\bin\Debug\ConsoleTest.pdb

Hence, looping through the args array will give you the path of all the files that were selected and dragged and dropped onto the .exe file of your console app. See:

static void Main(string[] args)
{
    foreach (var arg in args)
    {
        Console.WriteLine(arg);
    }
    Console.ReadLine();
}

The code sample above will print the names of all the files that were dragged and dropped onto it. For example, dragging 5 files onto the ConsoleTest.exe app prints the 5 file paths.

Take nth column in a text file

iirc :

cat filename.txt | awk '{ print $2, $4 }'

or, as mentioned in the comments :

awk '{ print $2, $4 }' filename.txt

Scanning Java annotations at runtime

Google Reflections seems to be much faster than Spring. I found this feature request that addresses the difference: http://www.opensaga.org/jira/browse/OS-738

This is a reason to use Reflections, as the startup time of my application is really important during development. Reflections also seems to be very easy to use for my use case (finding all implementers of an interface).

How do I print a double value without scientific notation using Java?

For integer values represented by a double, you can use this code, which is much faster than the other solutions.

public static String doubleToString(final double d) {
    // check for integer, also see https://stackoverflow.com/a/9898613/868941 and
    // https://github.com/google/guava/blob/master/guava/src/com/google/common/math/DoubleMath.java
    if (isMathematicalInteger(d)) {
        return Long.toString((long)d);
    } else {
        // or use any of the solutions provided by others, this is the best
        DecimalFormat df = 
            new DecimalFormat("0", DecimalFormatSymbols.getInstance(Locale.ENGLISH));
        df.setMaximumFractionDigits(340); // 340 = DecimalFormat.DOUBLE_FRACTION_DIGITS
        return df.format(d);
    }
}

// Java 8+
public static boolean isMathematicalInteger(final double d) {
    return StrictMath.rint(d) == d && Double.isFinite(d);
}

What is the preferred/idiomatic way to insert into a map?

As of C++11, you have two major additional options. First, you can use insert() with list initialization syntax:

function.insert({0, 42});

This is functionally equivalent to

function.insert(std::map<int, int>::value_type(0, 42));

but much more concise and readable. As other answers have noted, this has several advantages over the other forms:

  • The operator[] approach requires the mapped type to be assignable, which isn't always the case.
  • The operator[] approach can overwrite existing elements, and gives you no way to tell whether this has happened.
  • The other forms of insert that you list involve an implicit type conversion, which may slow your code down.

The major drawback is that this form used to require the key and value to be copyable, so it wouldn't work with e.g. a map with unique_ptr values. That has been fixed in the standard, but the fix may not have reached your standard library implementation yet.

Second, you can use the emplace() method:

function.emplace(0, 42);

This is more concise than any of the forms of insert(), works fine with move-only types like unique_ptr, and theoretically may be slightly more efficient (although a decent compiler should optimize away the difference). The only major drawback is that it may surprise your readers a little, since emplace methods aren't usually used that way.

Import pandas dataframe column as string not int

Just want to reiterate this will work in pandas >= 0.9.1:

In [2]: read_csv('sample.csv', dtype={'ID': object})
Out[2]: 
                           ID
0  00013007854817840016671868
1  00013007854817840016749251
2  00013007854817840016754630
3  00013007854817840016781876
4  00013007854817840017028824
5  00013007854817840017963235
6  00013007854817840018860166

I'm creating an issue about detecting integer overflows also.

EDIT: See resolution here: https://github.com/pydata/pandas/issues/2247

Update as it helps others:

To have all columns as str, one can do this (from the comment):

pd.read_csv('sample.csv', dtype = str)

To have most or selective columns as str, one can do this:

# lst of column names which needs to be string
lst_str_cols = ['prefix', 'serial']
# use dictionary comprehension to make dict of dtypes
dict_dtypes = {x : 'str'  for x in lst_str_cols}
# use dict on dtypes
pd.read_csv('sample.csv', dtype=dict_dtypes)
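
If you prefer to handle just a couple of columns with a callable instead of a dtype, read_csv also accepts a converters mapping, which likewise keeps the leading zeros; a minimal sketch (the column name is taken from the example above):

import pandas as pd

# apply str to every value of the ID column while parsing, preserving leading zeros
df = pd.read_csv('sample.csv', converters={'ID': str})
print(df.dtypes)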

Style the first <td> column of a table differently

The :nth-child() and :nth-of-type() pseudo-classes allow you to select elements with a formula.

The syntax is :nth-child(an+b), where you replace a and b by numbers of your choice.

For instance, :nth-child(3n+1) selects the 1st, 4th, 7th etc. child.

td:nth-child(3n+1) {  
  /* your stuff here */
}

:nth-of-type() works the same way, except that it only considers elements of the given type (td in the example).

How to properly validate input values with React.JS?

I have used redux-form and Formik in the past, and recently React introduced Hooks, so I have built a custom hook for form validation. Please check it out and see if it makes your form validation much easier.

Github: https://github.com/bluebill1049/react-hook-form

Website: http://react-hook-form.now.sh

With this approach, you no longer need controlled inputs either.

example below:

import React from 'react'
import useForm from 'react-hook-form'

function App() {
  const { register, handleSubmit, errors } = useForm() // initialise the hook
  const onSubmit = (data) => { console.log(data) } // callback when validation pass

  return (
    <form onSubmit={handleSubmit(onSubmit)}>
      <input name="firstname" ref={register} /> {/* register an input */}

      <input name="lastname" ref={register({ required: true })} /> {/* apply required validation */}
      {errors.lastname && 'Last name is required.'} {/* error message */}

      <input name="age" ref={register({ pattern: /\d+/ })} /> {/* apply a Refex validation */}
      {errors.age && 'Please enter number for age.'} {/* error message */}

      <input type="submit" />
    </form>
  )
}

Auto Increment after delete in MySQL

What you are trying to do is very dangerous. Think about this carefully. There is a very good reason for the default behaviour of auto increment.

Consider this:

A record is deleted in one table that has a relationship with another table. The corresponding record in the second table cannot be deleted for auditing reasons. This record becomes orphaned from the first table. If a new record is inserted into the first table, and a sequential primary key is used, this record is now linked to the orphan. Obviously, this is bad. By using an auto incremented PK, an id that has never been used before is always guaranteed. This means that orphans remain orphans, which is correct.

Failed to execute removeChild on Node

The direct parent of your child is markerDiv, so you should call remove from markerDiv as so:

markerDiv.removeChild(myCoolDiv);

Alternatively, you may want to remove markerNode. Since that node was appended directly to videoContainer, it can be removed with:

document.getElementById("playerContainer").removeChild(markerDiv);

Now, the easiest general way to remove a node, if you are absolutely confident that you did insert it into the DOM, is this:

markerDiv.parentNode.removeChild(markerDiv);

This works for any node (just replace markerDiv with a different node), and finds the parent of the node directly in order to call remove from it. If you are unsure if you added it, double check if the parentNode is non-null before calling removeChild.

Angular 2 TypeScript how to find element in Array

Transform the data structure to a map if you frequently use this search

mapPersons: Map<number, Person>;

// prepare the map - call once or when person array change
populateMap() : void {
    this.mapPersons = new Map();
    for (let o of this.personService.getPersons()) this.mapPersons.set(o.id, o);
}
getPerson(id: number) : Person {
    return this.mapPersons.get(id);
}

Raise to power in R

1: No difference. It is kept around to allow old S code to continue to function. This is documented as a "Note" in ?Math.

2: Yes: But you already know it:

`^`(x,y)
#[1] 1024

In R the mathematical operators are really functions; the parser takes care of rearranging arguments and function names for you to simulate ordinary mathematical infix notation. This is also documented at ?Math.

Edit: Let me add that knowing how R handles infix operators (i.e. two argument functions) is very important in understanding the use of the foundational infix "[[" and "["-functions as (functional) second arguments to lapply and sapply:

> sapply( list( list(1,2,3), list(4,3,6) ), "[[", 1)
[1] 1 4
> firsts <- function(lis) sapply(lis, "[[", 1)
> firsts( list( list(1,2,3), list(4,3,6) ) )
[1] 1 4

How to find the cumulative sum of numbers in a list?

values = [4, 6, 12]
total  = 0
sums   = []

for v in values:
  total = total + v
  sums.append(total)

print 'Values: ', values
print 'Sums:   ', sums

Running this code gives

Values: [4, 6, 12]
Sums:   [4, 10, 22]
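
On Python 3 the standard library already provides this; a minimal sketch using itertools.accumulate:

from itertools import accumulate

values = [4, 6, 12]
sums = list(accumulate(values))   # running totals
print(sums)                       # [4, 10, 22]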

Server Discovery And Monitoring engine is deprecated

If your code includes createConnection for some reason (in my case I'm using GridFsStorage), try adding the following to your code:

    options: {
        useUnifiedTopology: true,
    }

just after the file property, like this:

    const storage = new GridFsStorage({
    url: mongodbUrl,
    file: (req, file) => {
        return new Promise((resolve, reject) => {
            crypto.randomBytes(16, (err, buf) => {
                if (err) {
                    return reject(err);
                }
                const filename = buf.toString('hex') + path.extname(file.originalname);
                const fileInfo = {
                    filename: filename,
                    bucketName: 'uploads'
                };
                resolve(fileInfo);
            });
        });
    },
    options: {
        useUnifiedTopology: true,
    }
})

If your case looks like mine, this will surely solve your issue. Regards

How to delete specific columns with VBA?

To answer the question of how to delete specific columns in VBA for Excel, I use an array as below.

Sub del_col()

    Dim myArray As Variant
    Dim i As Integer

    myArray = Array(10, 9, 8) 'Delete from the highest column number to the lowest so the indexes don't shift
    For i = LBound(myArray) To UBound(myArray)
        ActiveSheet.Columns(myArray(i)).EntireColumn.Delete
    Next i

End Sub

How to scroll up or down the page to an anchor using jQuery?

OK, the simplest method is:

Within the click function (jQuery):

$('html,body').animate({scrollTop: $("#resultsdiv").offset().top},'slow');

HTML

<div id="resultsdiv">Where I want to scroll to</div>

Is Python faster and lighter than C++?

I think you're reading those stats incorrectly. They show that Python is up to about 400 times slower than C++ and with the exception of a single case, Python is more of a memory hog. When it comes to source size though, Python wins flat out.

My experience with Python shows the same definite trend: Python is on the order of 10 to 100 times slower than C++ when doing any serious number crunching. There are many reasons for this, the major ones being: a) Python is interpreted, while C++ is compiled; b) Python has no primitives; everything, including the built-in types (int, float, etc.), is an object; c) a Python list can hold objects of different types, so each entry has to store additional data about its type. All of this severely hinders both runtime and memory consumption.

This is no reason to ignore Python, though. A lot of software doesn't require much time or memory, even with the factor-of-100 slowdown. Development cost is where Python wins, with its simple and concise style. This improvement in development cost often outweighs the cost of the additional CPU and memory resources. When it doesn't, however, C++ wins.

Eclipse does not start when I run the exe?

If it's a Java version problem, you can edit the eclipse.ini file and assign a compatible Java version to the application by adding these lines:

windows example:

-vm
C:\jdk1.7.0_21\bin\javaw.exe 

for more information: https://wiki.eclipse.org/Eclipse.ini

How to get the width of a react element

This is basically Marco Antônio's answer for a React custom hook, but modified to set the dimensions initially and not only after a resize.

import { useState, useEffect } from "react"

export const useContainerDimensions = myRef => {
  const getDimensions = () => ({
    width: myRef.current.offsetWidth,
    height: myRef.current.offsetHeight
  })

  const [dimensions, setDimensions] = useState({ width: 0, height: 0 })

  useEffect(() => {
    const handleResize = () => {
      setDimensions(getDimensions())
    }

    if (myRef.current) {
      setDimensions(getDimensions())
    }

    window.addEventListener("resize", handleResize)

    return () => {
      window.removeEventListener("resize", handleResize)
    }
  }, [myRef])

  return dimensions;
};

Used in the same way:

const MyComponent = () => {
  const componentRef = useRef()
  const { width, height } = useContainerDimensions(componentRef)

  return (
    <div ref={componentRef}>
      <p>width: {width}px</p>
      <p>height: {height}px</p>
    </div>
  )
}

How to remove from a map while iterating it?

C++20 contains the convenience function std::erase_if.

So you can use that function to do it as a one-liner.

std::map<K, V> map_obj;
//calls needs_removing for each element and erases it if true was returned
std::erase_if(map_obj,needs_removing);
//if you need to pass only part of the key/value pair
std::erase_if(map_obj,[](auto& kv){return needs_removing(kv.first);});

How can I merge two commits into one if I already started rebase?

Since I use git cherry-pick for just about everything, to me it comes natural to do so even here.

Given that I have branchX checked out and there are two commits at the tip of it, of which I want to create one commit combining their content, I do this:

git checkout HEAD^                    # Check out the previous commit
git cherry-pick --no-commit branchX   # Cherry-pick the content of the second commit
git commit --amend                    # Create a new commit with their combined content

If I want to update branchX as well (and I suppose this is the downside of this method), I also have to:

git checkout branchX
git reset --hard <the_new_commit>

Checking if a collection is empty in Java: which is the best method?

Use CollectionUtils.isEmpty(Collection coll) from Apache Commons Collections:

Null-safe check if the specified collection is empty. Null returns true.

Parameters: coll - the collection to check, may be null

Returns: true if empty or null

How to resolve "git did not exit cleanly (exit code 128)" error on TortoiseGit?

In my case, it was because of the proxy. A proxy was needed in the corporate network, and TortoiseGit / Git does not seem to automatically get this information from the Windows internet settings. Setting up the proxy address solved the issue.

Set language for syntax highlighting in Visual Studio Code

In the very bottom right corner, to the left of the smiley, there is an indicator saying "Plain Text". When you click it, a menu with all languages appears where you can choose your desired language.


How to open URL in Microsoft Edge from the command line?

I too was wondering why you can't just start microsoftedge.exe, like you do "old-style" applications in windows 10. Searching the web, I found the answer -- it has to do with how Microsoft implemented "Universal Apps".

Below is a brief summary taken from that answer, but I recommend reading the entire entry, because it gives a great explanation of how these "Universal Apps" are being dealt with. Microsoft Edge is not the only app like this we'll be dealing with.

Here's the link: http://www.itworld.com/article/2943955/windows/how-to-script-microsofts-edge-browser.html

Here's the summary from that page:

"Microsoft Edge is a "Modern" Universal app. This means it can't be opened from the command line in the traditional Windows manner:  Executable name followed by command switches/parameter values. But where there's a will, there's a way. In this case, the "way" is known as protocol activation."

Kudos to the author of the article, Stephen Glasskeys.

How do I flush the PRINT buffer in TSQL?

Just for reference: if you work in scripts (batch processing), not in a stored procedure, flushing the output is triggered by the GO command, e.g.

print 'test'
print 'test'
go

In general, my conclusion is the following: the output of an MSSQL script execution, whether run in the SSMS GUI or with sqlcmd.exe, is flushed to the file, standard output, or GUI window at the first GO statement or at the end of the script.

Flushing inside a stored procedure works differently, since you cannot place GO inside it.

Reference: tsql Go statement

How to calculate probability in a normal distribution given mean & standard deviation?

scipy.stats is a great module. Just to offer another approach, you can calculate the normal density directly using:

import math
def normpdf(x, mean, sd):
    var = float(sd)**2
    denom = (2*math.pi*var)**.5
    num = math.exp(-(float(x)-float(mean))**2/(2*var))
    return num/denom

This uses the formula found here: http://en.wikipedia.org/wiki/Normal_distribution#Probability_density_function

to test:

>>> normpdf(7,5,5)  
0.07365402806066466
>>> norm(5,5).pdf(7)
0.073654028060664664
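
Note that normpdf gives the probability density, not a probability. If you need the probability that a value falls at or below x (the CDF), you can compute it with the error function from the standard library; a minimal sketch:

import math

def normcdf(x, mean, sd):
    # P(X <= x) for a normal distribution, via the error function
    return 0.5 * (1 + math.erf((x - float(mean)) / (float(sd) * math.sqrt(2))))

print(normcdf(7, 5, 5))   # ~0.655, matches scipy.stats.norm(5, 5).cdf(7)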

How do I force Internet Explorer to render in Standards Mode and NOT in Quirks?

I know this question was asked over 2 years ago but no one has mentioned this yet.

The best method is to use an HTTP header.

Adding the meta tag to the head doesn't always work, because IE might have determined the mode before the tag is read. The best way to make sure IE always uses standards mode is to use a custom HTTP header.

Header:

name: X-UA-Compatible  
value: IE=edge

For example in a .NET application you could put this in the web.config file.

<system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-UA-Compatible" value="IE=edge" />
      </customHeaders>
    </httpProtocol>
</system.webServer>

How to use JavaScript source maps (.map files)?

Just to add to how to use map files: I use Chrome on Ubuntu, and if I go to Sources and click on a file, and a map file exists, a message comes up telling me that I can view the original file and how to do it.

For the Angular files that I worked with today, I press Ctrl-P and a list of original files comes up in a small window.

I can then browse through the list to view the file that I would like to inspect and check where the issue might be.

How do I install g++ for Fedora?

You should execute:

dnf install gcc-c++

How to convert a string into double and vice versa?

Here's a working sample of NSNumberFormatter reading a localized number string (Xcode 3.2.4, OS X 10.6), to save others the hours I've just spent messing around. Beware: while it can handle trailing blanks such as "8,765.4 ", it cannot handle leading whitespace or stray text characters. (Bad input strings: " 8", "8q" and "8 q".)

NSString *tempStr = @"8,765.4";  
     // localization allows other thousands separators, also.
NSNumberFormatter * myNumFormatter = [[NSNumberFormatter alloc] init];
[myNumFormatter setLocale:[NSLocale currentLocale]]; // happen by default?
[myNumFormatter setFormatterBehavior:NSNumberFormatterBehavior10_4];
     // next line is very important!
[myNumFormatter setNumberStyle:NSNumberFormatterDecimalStyle]; // crucial

NSNumber *tempNum = [myNumFormatter numberFromString:tempStr];
NSLog(@"string '%@' gives NSNumber '%@' with intValue '%i'", 
    tempStr, tempNum, [tempNum intValue]);
[myNumFormatter release];  // good citizen

How do I delete everything below row X in VBA/Excel?

This function will clear the sheet data starting from the specified row and column:

Sub ClearWKSData(wksCur As Worksheet, iFirstRow As Integer, iFirstCol As Integer)

Dim iUsedCols As Integer
Dim iUsedRows As Integer

iUsedRows = wksCur.UsedRange.Row + wksCur.UsedRange.Rows.Count - 1
iUsedCols = wksCur.UsedRange.Column + wksCur.UsedRange.Columns.Count - 1

If iUsedRows > iFirstRow And iUsedCols > iFirstCol Then
    wksCur.Range(wksCur.Cells(iFirstRow, iFirstCol), wksCur.Cells(iUsedRows, iUsedCols)).Clear
End If

End Sub

Multiple Python versions on the same machine?

Update 2019: Using asdf

These days I suggest using asdf to install various versions of Python interpreters next to each other.

Note1: asdf works not only for Python but for all major languages.

Note2: asdf works fine in combination with popular package-managers such as pipenv and poetry.

If you have asdf installed you can easily download/install new Python interpreters:

# Install Python plugin for asdf:
asdf plugin-add python

# List all available Python interpreters:
asdf list-all python

# Install the Python interpreters that you need:
asdf install python 3.7.4
asdf install python 3.6.9
# etc...

# If you want to define the global version:
asdf global python 3.7.4

# If you want to define the local (project) version:
# (this creates a file .tool-versions in the current directory.)
asdf local python 3.7.4

Old Answer: Install Python from source

If you need to install multiple versions of Python (next to the main one) on Ubuntu / Mint (it should work similarly on other Unix systems):

1) Install Required Packages for source compilation

$ sudo apt-get install build-essential checkinstall
$ sudo apt-get install libreadline-gplv2-dev libncursesw5-dev libssl-dev libsqlite3-dev tk-dev libgdbm-dev libc6-dev libbz2-dev

2) Download and extract desired Python version

Download Python Source for Linux as tarball and move it to /usr/src.

Extract the downloaded package in place. (replace the 'x's with your downloaded version)

$ sudo tar xzf Python-x.x.x.tgz

3) Compile and Install Python Source

$ cd Python-x.x.x
$ sudo ./configure
$ sudo make altinstall

Your new Python bin is now located in /usr/local/bin. You can test the new version:

$ pythonX.X -V
Python x.x.x
$ which pythonX.X
/usr/local/bin/pythonX.X

# Pip is now available for this version as well:
$ pipX.X -V
pip X.X.X from /usr/local/lib/pythonX.X/site-packages (python X.X)
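
Once several interpreters are installed side by side, a common follow-up is to pin a project to one of them with a virtual environment. A minimal sketch using the standard venv module (the interpreter path and directory name are just examples):

# Run this with the interpreter the environment should use, e.g.:
#   /usr/local/bin/python3.7 create_env.py
import venv

venv.create(".venv", with_pip=True)   # creates ./.venv bound to the running interpreter
# afterwards: source .venv/bin/activate   (.venv\Scripts\activate on Windows)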

Any free WPF themes?

If you find any ... let me know!

Seriously, as Josh Smith points out in this post, it's amazing there isn't a CodePlex community or something for this. Heck, it is amazing that there aren't more for purchase!

The only one that I have found (for sale) is reuxables. A little pricey, if you ask me, but you do get 9 themes/61 variations.

UPDATE 1:

After I posted my answer, I thought, heck, I should go see if any CodePlex project exists for this already. I didn't find any specific project just for themes, but I did discover the WPF Contrib project ... which does have 1 theme that they never released.

UPDATE 2:

Rudi Grobler (above) just created CodePlex community for this ... starting with converted themes he mentions above. See his blog post for more info. Way to go Rudi!

UPDATE 3:

As another answer below has mentioned, since this question and my answer were written, the WPF Toolkit has incorporated some free themes, in particular, the themes from the Silverlight Toolkit. Rudi's project goes a little further and adds several more ... but depending on your situation, the WPF Toolkit might be all you need (and you might be installing it already).

How to search for a part of a word with ElasticSearch

I think there's no need to change any mapping. Try to use query_string; it's perfect. All scenarios will work with the default standard analyzer:

We have data:

{"_id" : "1","name" : "John Doeman","function" : "Janitor"}
{"_id" : "2","name" : "Jane Doewoman","function" : "Teacher"}

Scenario 1:

{"query": {
    "query_string" : {"default_field" : "name", "query" : "*Doe*"}
} }

Response:

{"_id" : "1","name" : "John Doeman","function" : "Janitor"}
{"_id" : "2","name" : "Jane Doewoman","function" : "Teacher"}

Scenario 2:

{"query": {
    "query_string" : {"default_field" : "name", "query" : "*Jan*"}
} }

Response:

{"_id" : "1","name" : "John Doeman","function" : "Janitor"}

Scenario 3:

{"query": {
    "query_string" : {"default_field" : "name", "query" : "*oh* *oe*"}
} }

Response:

{"_id" : "1","name" : "John Doeman","function" : "Janitor"}
{"_id" : "2","name" : "Jane Doewoman","function" : "Teacher"}

EDIT - Same implementation with spring data elastic search https://stackoverflow.com/a/43579948/2357869

One more explanation how query_string is better than others https://stackoverflow.com/a/43321606/2357869

Declaring an HTMLElement Typescript

Note that const declarations are block-scoped.

const el: HTMLElement | null = document.getElementById('content');

if (el) {
  const definitelyAnElement: HTMLElement = el;
}

So the value of definitelyAnElement is not accessible outside of the {}.

(I would have commented above, but I do not have enough Reputation apparently.)

Can you issue pull requests from the command line on GitHub?

A man search like...

man git | grep pull | grep request

gives

git request-pull <start> <url> [<end>]

But, despite the name, it's not what you want. According to the docs:

Generate a request asking your upstream project to pull changes into their tree. The request, printed to the standard output, begins with the branch description, summarizes the changes and indicates from where they can be pulled.

@HolgerJust mentioned the github gem that does what you want:

sudo gem install gh 
gh pull-request [user] [branch]

Others have mentioned the official hub package by github:

sudo apt-get install hub

or

brew install hub 

then

hub pull-request [-focp] [-b <BASE>] [-h <HEAD>]

clear cache of browser by command line

Here is how to clear all trash and caches (without other private data in the browsers) from the command line. This is a command-line batch script that takes care of all of the trash (as of April 2014):

erase "%TEMP%\*.*" /f /s /q
for /D %%i in ("%TEMP%\*") do RD /S /Q "%%i"

erase "%TMP%\*.*" /f /s /q
for /D %%i in ("%TMP%\*") do RD /S /Q "%%i"

erase "%ALLUSERSPROFILE%\TEMP\*.*" /f /s /q
for /D %%i in ("%ALLUSERSPROFILE%\TEMP\*") do RD /S /Q "%%i"

erase "%SystemRoot%\TEMP\*.*" /f /s /q
for /D %%i in ("%SystemRoot%\TEMP\*") do RD /S /Q "%%i"


@rem Clear IE cache -  (Deletes Temporary Internet Files Only)
RunDll32.exe InetCpl.cpl,ClearMyTracksByProcess 8
erase "%LOCALAPPDATA%\Microsoft\Windows\Tempor~1\*.*" /f /s /q
for /D %%i in ("%LOCALAPPDATA%\Microsoft\Windows\Tempor~1\*") do RD /S /Q "%%i"

@rem Clear Google Chrome cache
erase "%LOCALAPPDATA%\Google\Chrome\User Data\*.*" /f /s /q
for /D %%i in ("%LOCALAPPDATA%\Google\Chrome\User Data\*") do RD /S /Q "%%i"


@rem Clear Firefox cache
erase "%LOCALAPPDATA%\Mozilla\Firefox\Profiles\*.*" /f /s /q
for /D %%i in ("%LOCALAPPDATA%\Mozilla\Firefox\Profiles\*") do RD /S /Q "%%i"

pause

I am pretty sure it will run for some time when you first run it :) Enjoy!

Hide Spinner in Input Number - Firefox 29

/* for Chrome */
input[type=number]::-webkit-inner-spin-button,
input[type=number]::-webkit-outer-spin-button {
    -webkit-appearance: none;
    margin: 0;
}

/* for Mozilla */
input[type=number] {
    -moz-appearance: textfield;
}

Angular ReactiveForms: Producing an array of checkbox values?

If you want to use an Angular reactive form (https://angular.io/guide/reactive-forms), you can use a single form control to manage the outputted value of the group of checkboxes.

component

import { Component } from '@angular/core';
import { FormGroup, FormControl } from '@angular/forms';
import { flow } from 'lodash';
import { flatMap, filter } from 'lodash/fp';

@Component({
  selector: 'multi-checkbox',
  templateUrl: './multi-checkbox.layout.html',
})
export class MultiCheckboxComponent  {

  checklistState = [ 
      {
        label: 'Frodo Baggins',
        value: 'frodo_baggins',
        checked: false
      },
      {
        label: 'Samwise Gamgee',
        value: 'samwise_gamgee',
        checked: true,
      },
      {
        label: 'Merry Brandybuck',
        value: 'merry_brandybuck',
        checked: false
      }
    ];

  form = new FormGroup({
    checklist : new FormControl(this.flattenValues(this.checklistState)),
  });


  checklist = this.form.get('checklist');

  onChecklistChange(checked, checkbox) {
    checkbox.checked = checked;
    this.checklist.setValue(this.flattenValues(this.checklistState));
  }

  flattenValues(checkboxes) {
    const flattenedValues = flow([
      filter(checkbox => checkbox.checked),
      flatMap(checkbox => checkbox.value )
    ])(checkboxes)
    return flattenedValues.join(',');
  }
}

html

<form [formGroup]="form">
    <label *ngFor="let checkbox of checklistState" class="checkbox-control">
    <input type="checkbox" (change)="onChecklistChange($event.target.checked, checkbox)" [checked]="checkbox.checked" [value]="checkbox.value" /> {{ checkbox.label }}
  </label>
</form>

checklistState

Manages the model/state of the checklist inputs. This model allows you to map the current state to whatever value format you need.

Model:

{
   label: 'Value 1',
   value: 'value_1',
   checked: false
},
{
  label: 'Samwise Gamgee',
  value: 'samwise_gamgee',
  checked: true,
},
{
  label: 'Merry Brandybuck',
  value: 'merry_brandybuck',
  checked: false
}

checklist Form Control

This control stores the value you would like to save, e.g.

value output: "value_1,value_2"

See demo at https://stackblitz.com/edit/angular-multi-checklist

How to pause a vbscript execution?

If the script runs under Windows Script Host (wscript.exe or cscript.exe), the built-in WScript object already provides a Sleep method:

WScript.Sleep 2000 'Sleeps for 2 seconds

Another option is to import and use the WinAPI function directly (only works in VBA, thanks @Helen):

Declare Sub Sleep Lib "kernel32" (ByVal dwMilliseconds As Long)
Sleep 2000

How to Navigate from one View Controller to another using Swift

Swift 4

You can switch screens by pushing onto the navigation controller. First of all, you have to embed your UIViewController in a navigation controller; then:

let vc = self.storyboard?.instantiateViewController(withIdentifier: "YourStoryboardID") as! swiftClassName

self.navigationController?.pushViewController(vc, animated: true)

Pass Model To Controller using Jquery/Ajax

Use the following JS:

$(document).ready(function () {
    $("#btnsubmit").click(function () {

             $.ajax({
                 type: "POST",
                 url: '/Plan/PlanManage',     //your action
                 data: $('#PlanForm').serialize(),   //your form id; this serializes all the model values
                 dataType: 'json',
                 success: function (result) {
                     console.log(result);
                 }
             })
        return false;
    });
});

and the following code on your controller:

[HttpPost]
public string PlanManage(Plan objplan)  //model binding fills the Plan model
{
    // work with objplan here, then return a result for the AJAX success callback
    return "success";
}

ExpressJS How to structure an application?

The following is Peter Lyons' answer verbatim, ported over to vanilla JS from CoffeeScript, as requested by several others. Peter's answer is excellent, and anyone voting on my answer should vote on his as well.


Config

What you are doing is fine. I like to have my own config namespace set up in a top-level config.js file with a nested namespace like this.

// Set the current environment to true in the env object
var currentEnv = process.env.NODE_ENV || 'development';
exports.appName = "MyApp";
exports.env = {
  production: false,
  staging: false,
  test: false,
  development: false
};  
exports.env[currentEnv] = true;
exports.log = {
  path: __dirname + "/var/log/app_" + currentEnv + ".log"
};  
exports.server = {
  port: 9600,
  // In staging and production, listen loopback. nginx listens on the network.
  ip: '127.0.0.1'
};  
if (currentEnv != 'production' && currentEnv != 'staging') {
  exports.enableTests = true;
  // Listen on all IPs in dev/test (for testing from other machines)
  exports.server.ip = '0.0.0.0';
};
exports.db = {
  URL: "mongodb://localhost:27017/" + exports.appName.toLowerCase() + "_" + currentEnv
};

This is friendly for sysadmin editing. Then when I need something, like the DB connection info, it's:

require('./config').db.URL

Routes/Controllers

I like to leave my routes with my controllers and organize them in an app/controllers subdirectory. Then I can load them up and let them add whatever routes they need.

In my app/server.js javascript file I do:

[
  'api',
  'authorization',
  'authentication',
  'domains',
  'users',
  'stylesheets',
  'javascripts',
  'tests',
  'sales'
].map(function(controllerName){
  var controller = require('./controllers/' + controllerName);
  controller.setup(app);
});

So I have files like:

app/controllers/api.js
app/controllers/authorization.js
app/controllers/authentication.js
app/controllers/domains.js

And for example in my domains controller, I have a setup function like this.

exports.setup = function(app) {
  var controller = new exports.DomainController();
  var route = '/domains';
  app.post(route, controller.create);
  app.put(route, api.needId);
  app.delete(route, api.needId);
  route = '/domains/:id';
  app.put(route, controller.loadDomain, controller.update);
  app.del(route, controller.loadDomain, function(req, res){
    res.sendJSON(req.domain, status.OK);
  });
}

Views

Putting views in app/views is becoming the customary place. I lay it out like this.

app/views/layout.jade
app/views/about.jade
app/views/user/EditUser.jade
app/views/domain/EditDomain.jade

Static Files

Go in a public subdirectory.

Github/Semver/NPM

Put a README.md markdown file at your git repo root for github.

Put a package.json file with a semantic version number in your git repo root for NPM.

How to open a file for both reading and writing?

I have tried something like this and it works as expected:

f = open("c:\\log.log", 'r+b')
f.write("\x5F\x9D\x3E")
f.read(100)
f.close()

Where:

f.read(size) - To read a file’s contents, call f.read(size), which reads some quantity of data and returns it as a string.

And:

f.write(string) writes the contents of string to the file, returning None.

Also if you open Python tutorial about reading and writing files you will find that:

'r+' opens the file for both reading and writing.

On Windows, 'b' appended to the mode opens the file in binary mode, so there are also modes like 'rb', 'wb', and 'r+b'.
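
The snippet above is Python 2 style; in Python 3, a file opened in binary mode expects bytes objects, and the file position moves as you write, so seek back before reading what you just wrote. A minimal sketch of the same idea (the path mirrors the example above):

with open("c:\\log.log", "r+b") as f:   # the file must already exist for r+
    f.write(b"\x5F\x9D\x3E")            # overwrite the first three bytes
    f.seek(0)                           # move back to the start before reading
    data = f.read(100)                  # read up to 100 bytes, including what was written
print(data[:3])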

number several equations with only one number

How about something like:

\documentclass{article}

\usepackage{amssymb,amsmath}

\begin{document}

\begin{equation}\label{A_Label}
  \begin{split}
    w^T x_i + b \geqslant 1-\xi_i \text{ if } y_i &= 1, \\
    w^T x_i + b \leqslant -1+\xi_i \text{ if } y_i &= -1
  \end{split}
\end{equation}

\end{document}

which produces both inequalities aligned under a single shared equation number.

generating variable names on fly in python

On an object, you can achieve this with setattr

>>> class A(object): pass
>>> a=A()
>>> setattr(a, "hello1", 5)
>>> a.hello1
5
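
If the dynamically named values don't need to live on an object, a plain dict is usually the simpler container; a small sketch:

values = {}
values["hello1"] = 5      # the key plays the role of the generated variable name
print(values["hello1"])   # 5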

How to create a POJO?

There are mainly three options possible for mapping purposes:

  1. serialize
  2. XML mapping
  3. POJO mapping (Plain Old Java Objects)

When using POJO classes, it is easy for a developer to map them to the database. POJO classes are created for the database, and at the same time value-object classes are created with getter and setter methods that can easily hold the content.

So, for the purpose of mapping between Java and the database, value objects and POJO classes are implemented.

System.BadImageFormatException: Could not load file or assembly

My cause was different: I referenced a web service and then got this message.

Then I changed my target from .NET Framework 4.0 to .NET Framework 2.0 and re-referenced my web service. After a few changes the problem was solved; there were no more errors and it worked fine.

hope this helps!

tsconfig.json: Build:No inputs were found in config file

You can also try to restart your code editor. That works well too.

How can I get a list of all functions stored in the database of a particular schema in PostgreSQL?

Run below SQL query to create a view which will show all functions:

CREATE OR REPLACE VIEW show_functions AS
    SELECT routine_name FROM information_schema.routines 
        WHERE routine_type='FUNCTION' AND specific_schema='public';

What is the difference between document.location.href and document.location?

Here is an example of the practical significance of the difference and how it can bite you if you don't realize it (document.location being an object and document.location.href being a string):

We use the free version of MonoX Social CMS (http://mono-software.com) at http://social.ClipFlair.net, and we wanted to add the language bar WebPart on some pages to localize them, but on some others (e.g. the discussions) we didn't want localization. So we made two master pages to use across all our .aspx (ASP.NET) pages: the first one had the language bar WebPart, and the other one had the following script to remove the /lng/el-GR etc. from the URL and show the default language (English in our case) for those pages:

<script>
  var curAddr = document.location; //MISTAKE
  var newAddr = curAddr.replace(new RegExp("/lng/[a-z]{2}-[A-Z]{2}", "gi"), "");
  if (curAddr != newAddr)
    document.location = newAddr;
</script>

But this code wasn't working: the replace call just returned undefined (no exception thrown), so the page tried to navigate to, say, x/lng/el-GR/undefined instead of going to URL x. Checking it with Mozilla Firefox's debugger (F12) and moving the cursor over the curAddr variable showed lots of info instead of a simple string value for the URL. Selecting Watch from that popup, you could see in the watch pane that it showed "Location -> ..." instead of "..." for the URL. That made me realize it was an object.

One would have expected an exception, but the problem is that the call resolves to the Location object's own replace method, which is meant for navigating to a new URL and does not return a new string, rather than to String.prototype.replace.

The correct code in that case is:

<script>
  var curAddr = document.location.href; //CORRECT
  var newAddr = curAddr.replace(new RegExp("/lng/[a-z]{2}-[A-Z]{2}", "gi"), "");
  if (curAddr != newAddr)
    document.location = newAddr;
</script>

Force Intellij IDEA to reread all maven dependencies

If you work in IntelliJ, there are four independent ways to refresh Maven dependencies. Each of them refreshes the local repository on your computer in a different way.

1. mvn -U clean install
2. Ctrl+Shift+A - Reimport
3. Round arrows in the Maven window

4. Ctrl+Alt+S , go to Build, Execution, Deployment | Build Tools | Maven | Repositories -choose rep - update

Interestingly, it is often said that the last refresh is equivalent to the round arrows in the Maven window. But in my experience they are absolutely different! The proof: our large project fails with the last refresh, but builds and runs happily without it, while the round arrows run fine on it.

Each of these four can help you with your problems and/or reveal problems of its own. For example, for running our real-life project only the first is necessary, but for testing in IntelliJ we also need 2 and 3. Surely somebody needs 4, too. (Why else would IntelliJ have it?)

Is there a difference between PhoneGap and Cordova commands?

From what I've read (and please correct me if I'm wrong):

Phonegap claim that they started trying to make this but couldn't, so they passed it to the Apache Software Foundation.

Apache in their awesomeness (Long live Apache) fixed it, developed it, and made it supremely awesome.

Now Phonegap are trying to maintain and enhance a copy they took back, but keep stuffing it up.

So, by my thinking, I want a solid and trustworthy dev platform made by seasoned professionals that I can trust, rather than a patched upon sub-version of said. Therefore Id say I am a Cordova developer NOT a Phonegap developer.

I've also read that, in a second desperate attempt to gain popularity and control over the great work of Apache, PhoneGap has now been sold under the Adobe flag. You know Adobe: they are the guys who do nothing for free and are so bad at maintaining software life-cycles that their apps need to perform updates every time you blink, and for some reason each of their apps is about 100 times the size you would expect.

I guess that is the summary of my research if I didn't read it wrongly.

And if true, then lets all drop this whole Phonegap nonsense and just stick with Cordova.

Running Groovy script from the command line

It will work on Linux kernel 2.6.28 (confirmed on 4.9.x). It won't work on FreeBSD and other Unix flavors.

Your /usr/local/bin/groovy is a shell script wrapping the Java runtime running Groovy.

See the "Interpreter scripts" section of execve(2).

jquery: how to get the value of id attribute?

You can also try this way

<option id="opt7" class='select_continent' data-value='7'>Antarctica</option>

jquery

$('.select_continent').click(function () {
alert($(this).data('value'));
});

Good luck !!!!

Is there a destructor for Java?

There is a @Cleanup annotation in Lombok that mostly resembles C++ destructors:

@Cleanup
ResourceClass resource = new ResourceClass();

When processing it (at compilation time), Lombok inserts appropriate try-finally block so that resource.close() is invoked, when execution leaves the scope of the variable. You can also specify explicitly another method for releasing the resource, e.g. resource.dispose():

@Cleanup("dispose")
ResourceClass resource = new ResourceClass();

How to use subList()

Using subList(30, 38) will fail because index 38 is beyond the size of the list (it throws IndexOutOfBoundsException), so it's not possible as written.

The only way is to determine the maximum index explicitly, using the list's size() method, before asking for the sublist.

For example, if size() returns 35, call subList(30, list.size());

OR

COPIED FROM pb2q comment

dataList = dataList.subList(30, 38 > dataList.size() ? dataList.size() : 38);

How to Create a Form Dynamically Via Javascript

Something like the following. Add this after the body tag; it is a rough sketch that you will need to modify according to your needs.

<script>
var f = document.createElement("form");
f.setAttribute('method',"post");
f.setAttribute('action',"submit.php");

var i = document.createElement("input"); //input element, text
i.setAttribute('type',"text");
i.setAttribute('name',"username");

var s = document.createElement("input"); //input element, Submit button
s.setAttribute('type',"submit");
s.setAttribute('value',"Submit");

f.appendChild(i);
f.appendChild(s);

//add more input elements here as needed

document.getElementsByTagName('body')[0].appendChild(f);

</script>

Error # 1045 - Cannot Log in to MySQL server -> phpmyadmin

In MySQL 5.7 the authentication mechanism changed; documentation can be found in the official manual.

Using the system root user (or sudo) you can connect to the mysql database with the mysql 'root' user via CLI. All other users will work, too.

In phpmyadmin however, all mysql users will work, but not the mysql 'root' user.

This comes from here:

$ mysql -Ne "select Host,User,plugin from mysql.user where user='root';"
+-----------+------+-----------------------+
| localhost | root | auth_socket           |
|  hostname | root | mysql_native_password |
+-----------+------+-----------------------+

To 'fix' this security feature, do:

mysql -Ne "update mysql.user set plugin='mysql_native_password' where User='root' and Host='localhost'; flush privileges;"

More on this can also be found here in the manual.

limit text length in php and provide 'Read more' link

This worked for me.

// strip tags to avoid breaking any html
$string = strip_tags($string);
if (strlen($string) > 500) {

    // truncate string
    $stringCut = substr($string, 0, 500);
    $endPoint = strrpos($stringCut, ' ');

    //if the string doesn't contain any space, fall back to cutting at the character limit
    $string = $endPoint? substr($stringCut, 0, $endPoint) : substr($stringCut, 0);
    $string .= '... <a href="/this/story">Read More</a>';
}
echo $string;

Thanks @webbiedave

How to scroll table's "tbody" independent of "thead"?

thead {
  position: fixed;
  height: 10px; /* This is whatever height you want */
}

tbody {
  position: fixed;
  margin-top: 10px; /* This has to match the height of thead */
  height: 300px; /* This is whatever height you want */
}

for each inside a for each - Java

Your syntax is not correct. It should be like this:

for (Tweet tweet : tweets) {              
    for(long forId : idFromArray){
        long tweetId = tweet.getId();
        if(forId != tweetId){
            String twitterString = tweet.getText();
            db.insertTwitter(twitterString);
        }
    }
}

EDIT

This answer no longer really answers the question since it was updated ;)

Update R using RStudio

Don't use RStudio to update R. RStudio IS NOT R; RStudio is just an IDE. This answer is a summary of previous answers for different OSes. For all OSes it is convenient to check in advance what will happen to the packages you have already installed.

WINDOWS ->> Open CMD/PowerShell as an administrator and type "R" to go into interactive mode. If this does not work, search for and run RGui.exe instead of typing R in the console, and then:

lib_path <- gsub( "/", "\\\\" , Sys.getenv("R_LIBS_USER"))
install.packages("installr", lib = lib_path)
install.packages("stringr", lib_path)
library(stringr, lib.loc = lib_path)
library(installr, lib.loc = lib_path)
installr::updateR()

MacOS ->> You can use updateR package. The package is not on CRAN, so you’ll need to run the following code in Rgui:

install.packages("devtools")
devtools::install_github("AndreaCirilloAC/updateR")
updateR(admin_password = "PASSWORD") # Where "PASSWORD" stands for your system password

Note that it is planned to merge updateR and installR in the near future to work for both Mac and Windows.

Linux ->> For the moment installr is NOT available for Linux/MacOS (see documentation for current version 0.20). As R is installed, you can follow these instructions (in Ubuntu, although the idea is the same in other distros: add the source, update and upgrade and install.)

Tar archiving that takes input from a list of files

Yes:

tar -cvf allfiles.tar -T mylist.txt

Use Font Awesome Icon in Placeholder

Ignoring the jQuery, this can be done using the ::placeholder pseudo-element of an input element.

<form role="form">
  <div class="form-group">
    <input type="text" class="form-control name" placeholder="&#xF002;"/>
  </div>
</form>

The css part

input.name::placeholder{ font-family:fontAwesome; font-size:[size needed]; color:[placeholder color needed] }

input.name{ font-family:[font family you want to specify] }

THE BEST PART: you can have a different font family for the placeholder and the text.

Add list to set?

To add the elements of a list to a set, use update

From https://docs.python.org/2/library/sets.html

s.update(t): return set s with elements added from t

E.g.

>>> s = set([1, 2])
>>> l = [3, 4]
>>> s.update(l)
>>> s
{1, 2, 3, 4}

If you instead want to add the entire list as a single element to the set, you can't because lists aren't hashable. You could instead add a tuple, e.g. s.add(tuple(l)). See also TypeError: unhashable type: 'list' when using built-in set function for more information on that.
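
Note that update also accepts several iterables at once, and the in-place union operator |= does the same job for another set; a small sketch:

s = {1, 2}
s.update([3, 4], (5, 6))   # add elements from several iterables in one call
s |= {7, 8}                # in-place union with another set
print(s)                   # {1, 2, 3, 4, 5, 6, 7, 8}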

Spring MVC 4: "application/json" Content Type is not being set correctly

As other people have commented, because the return type of your method is String, Spring won't feel the need to do anything further with the result.

If you change your signature so that the return type is something that needs marshalling, that should help:

@RequestMapping(value = "/json", method = RequestMethod.GET, produces = "application/json")
@ResponseBody
public Map<String, Object> bar() {
    HashMap<String, Object> map = new HashMap<String, Object>();
    map.put("test", "jsonRestExample");
    return map;
}

BigDecimal setScale and round

One important point that is alluded to but not directly addressed is the difference between "precision" and "scale" and how they are used in the two statements. "precision" is the total number of significant digits in a number. "scale" is the number of digits to the right of the decimal point.

The MathContext constructor only accepts precision and RoundingMode as arguments, and therefore scale is never specified in the first statement.

setScale() obviously accepts scale as an argument, as well as RoundingMode, however precision is never specified in the second statement.

If you move the decimal point one place to the right, the difference will become clear:

// 1.
new BigDecimal("35.3456").round(new MathContext(4, RoundingMode.HALF_UP));
//result = 35.35
// 2.
new BigDecimal("35.3456").setScale(4, RoundingMode.HALF_UP);
// result = 35.3456

Simplest two-way encryption using PHP

Encrypting using openssl_encrypt() The openssl_encrypt function provides a secured and easy way to encrypt your data.

In the script below, we use the AES128 encryption method, but you may consider other encryption methods depending on what you want to encrypt.

<?php
$message_to_encrypt = "Yoroshikune";
$secret_key = "my-secret-key";
$method = "aes128";
$iv_length = openssl_cipher_iv_length($method);
$iv = openssl_random_pseudo_bytes($iv_length);

$encrypted_message = openssl_encrypt($message_to_encrypt, $method, $secret_key, 0, $iv);

echo $encrypted_message;
?>

Here is an explanation of the variables used:

  • message_to_encrypt : the data you want to encrypt
  • secret_key : your ‘password’ for encryption. Be sure not to choose something too easy, and be careful not to share your secret key with other people
  • method : the method of encryption. Here we chose AES128.
  • iv_length and iv : prepare the encryption using random bytes (the initialization vector)
  • encrypted_message : the variable containing your encrypted message

Decrypting using openssl_decrypt() Now you encrypted your data, you may need to decrypt it in order to re-use the message you first included into a variable. In order to do so, we will use the function openssl_decrypt().

<?php
$message_to_encrypt = "Yoroshikune";
$secret_key = "my-secret-key";
$method = "aes128";
$iv_length = openssl_cipher_iv_length($method);
$iv = openssl_random_pseudo_bytes($iv_length);
$encrypted_message = openssl_encrypt($message_to_encrypt, $method, $secret_key, 0, $iv);

$decrypted_message = openssl_decrypt($encrypted_message, $method, $secret_key, 0, $iv);

echo $decrypted_message;
?>

The decrypt method proposed by openssl_decrypt() is close to openssl_encrypt().

The only difference is that instead of adding $message_to_encrypt, you will need to add your already encrypted message as the first argument of openssl_decrypt().

That is all you have to do.

Removing App ID from Developer Connection

Update: You can now remove an App ID (as noted by @Guru in the comments).

In the past, this was not possible: I had the same problem, and the folks at Apple replied that they leave all of the App IDs you create associated with your login, to keep a sort of history related to your account.

It seems that they have finally changed their minds about this.

Proper way to get page content

Get page content by page title:

<?php
$page = get_page_by_title( 'page-name' );
$content = apply_filters('the_content', $page->post_content); 
echo $content;
?>

C# LINQ find duplicates in List

Another way is using HashSet:

var hash = new HashSet<int>();
var duplicates = list.Where(i => !hash.Add(i));

If you want unique values in your duplicates list:

var myhash = new HashSet<int>();
var mylist = new List<int>(){1,1,2,2,3,3,3,4,4,4};
var duplicates = mylist.Where(item => !myhash.Add(item)).Distinct().ToList();

Here is the same solution as a generic extension method:

public static class Extensions
{
  public static IEnumerable<TSource> GetDuplicates<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> selector, IEqualityComparer<TKey> comparer)
  {
    var hash = new HashSet<TKey>(comparer);
    return source.Where(item => !hash.Add(selector(item))).ToList();
  }

  public static IEnumerable<TSource> GetDuplicates<TSource>(this IEnumerable<TSource> source, IEqualityComparer<TSource> comparer)
  {
    return source.GetDuplicates(x => x, comparer);      
  }

  public static IEnumerable<TSource> GetDuplicates<TSource, TKey>(this IEnumerable<TSource> source, Func<TSource, TKey> selector)
  {
    return source.GetDuplicates(selector, null);
  }

  public static IEnumerable<TSource> GetDuplicates<TSource>(this IEnumerable<TSource> source)
  {
    return source.GetDuplicates(x => x, null);
  }
}

AttributeError: 'str' object has no attribute 'strftime'

You should use datetime object, not str.

>>> from datetime import datetime
>>> cr_date = datetime(2013, 10, 31, 18, 23, 29, 227)
>>> cr_date.strftime('%m/%d/%Y')
'10/31/2013'

To get the datetime object from the string, use datetime.datetime.strptime:

>>> datetime.strptime(cr_date, '%Y-%m-%d %H:%M:%S.%f')
datetime.datetime(2013, 10, 31, 18, 23, 29, 227)
>>> datetime.strptime(cr_date, '%Y-%m-%d %H:%M:%S.%f').strftime('%m/%d/%Y')
'10/31/2013'

How to execute a query in ms-access in VBA code?

Take a look at this tutorial for how to use SQL inside VBA:

http://www.ehow.com/how_7148832_access-vba-query-results.html

For a query that won't return results, use (reference here):

DoCmd.RunSQL

For one that will, use (reference here):

Dim dBase As Database
dBase.OpenRecordset

preferredStatusBarStyle isn't called

Tyson's answer is correct for changing the status bar color to white in UINavigationController.

If anyone wants to accomplish the same result by writing the code in the AppDelegate, use the code below inside the AppDelegate's didFinishLaunchingWithOptions method.

And don't forget to set UIViewControllerBasedStatusBarAppearance to YES in the .plist file, otherwise the change will not take effect.

Code

- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
     // status bar appearance code
     [[UINavigationBar appearance] setBarStyle:UIBarStyleBlack];

     return YES;
}

Get a UTC timestamp

As wizzard pointed out, the correct method is,

new Date().getTime();

or under Javascript 1.5, just

Date.now();

From the documentation,

The value returned by the getTime method is the number of milliseconds since 1 January 1970 00:00:00 UTC.

If you wanted to make a time stamp without milliseconds you can use,

Math.floor(Date.now() / 1000);

I wanted to make this an answer so the correct method is more visible.

You can compare ExpExc's and Narendra Yadala's results to the method above at http://jsfiddle.net/JamesFM/bxEJd/, and verify with http://www.unixtimestamp.com/ or by running date +%s on a Unix terminal.

What's the best way to break from nested loops in JavaScript?

Here are five ways to break out of nested loops in JavaScript:

1) Set the parent loop's counter to its end value

for (i = 0; i < 5; i++)
{
    for (j = 0; j < 5; j++)
    {
        if (j === 2)
        {
            i = 5;
            break;
        }
    }
}

2) Use label

exit_loops:
for (i = 0; i < 5; i++)
{
    for (j = 0; j < 5; j++)
    {
        if (j === 2)
            break exit_loops;
    }
}

3) Use variable

var exit_loops = false;
for (i = 0; i < 5; i++)
{
    for (j = 0; j < 5; j++)
    {
        if (j === 2)
        {
            exit_loops = true;
            break;
        }
    }
    if (exit_loops)
        break;
}

4) Use a self-executing function

(function()
{
    for (i = 0; i < 5; i++)
    {
        for (j = 0; j < 5; j++)
        {
             if (j === 2)
                 return;
        }
    }
})();

5) Use regular function

function nested_loops()
{
    for (i = 0; i < 5; i++)
    {
        for (j = 0; j < 5; j++)
        {
             if (j === 2)
                 return;
        }
    }
}
nested_loops();

Prolog "or" operator, query

Just another viewpoint: an "or" in Prolog can also be expressed with the disjunction operator, the semicolon:

registered(X, Y) :-
    X = ct101; X = ct102; X = ct103.

For a fuller explanation:

Predicate control in Prolog

Hadoop "Unable to load native-hadoop library for your platform" warning

Add the following environment variables (adjust HADOOP_HOME to your installation path):

export HADOOP_HOME=/home/hadoop/hadoop-2.4.1  
export PATH=$HADOOP_HOME/bin:$PATH  
export HADOOP_PREFIX=$HADOOP_HOME  
export HADOOP_COMMON_HOME=$HADOOP_PREFIX  
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_PREFIX/lib/native  
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop  
export HADOOP_HDFS_HOME=$HADOOP_PREFIX  
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX  
export HADOOP_YARN_HOME=$HADOOP_PREFIX  
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH

Could not load file or assembly 'Microsoft.ReportViewer.Common, Version=11.0.0.0

I solved it by downloading the ReportViewer installer and installing it. After the installation, all related assemblies are available in C:\Windows\assembly\GAC_MSIL, and you can then reference the assembly in web.config.
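
As a rough sketch of what that web.config reference typically looks like (verify the Version and PublicKeyToken against the actual entry in your GAC_MSIL folder):

<system.web>
  <compilation>
    <assemblies>
      <add assembly="Microsoft.ReportViewer.Common, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91" />
    </assemblies>
  </compilation>
</system.web>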

Java: how do I get a class literal from a generic type?

As we all know, the generic type gets erased at runtime. However, it can be recovered in some circumstances, namely when the type is explicitly mentioned in the class hierarchy:

import java.lang.reflect.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public abstract class CaptureType<T> {
    /**
     * {@link java.lang.reflect.Type} object of the corresponding generic type. This method is useful to obtain every kind of information (including annotations) of the generic type.
     *
     * @return Type object. null if type could not be obtained (This happens in case of generic type whose information cant be obtained using Reflection). Please refer documentation of {@link com.types.CaptureType}
     */
    public Type getTypeParam() {
        Class<?> bottom = getClass();
        Map<TypeVariable<?>, Type> reifyMap = new LinkedHashMap<>();

        for (; ; ) {
            Type genericSuper = bottom.getGenericSuperclass();
            if (!(genericSuper instanceof Class)) {
                ParameterizedType generic = (ParameterizedType) genericSuper;
                Class<?> actualClaz = (Class<?>) generic.getRawType();
                TypeVariable<? extends Class<?>>[] typeParameters = actualClaz.getTypeParameters();
                Type[] reified = generic.getActualTypeArguments();
                assert (typeParameters.length != 0);
                for (int i = 0; i < typeParameters.length; i++) {
                    reifyMap.put(typeParameters[i], reified[i]);
                }
            }

            if (bottom.getSuperclass().equals(CaptureType.class)) {
                bottom = bottom.getSuperclass();
                break;
            }
            bottom = bottom.getSuperclass();
        }

        TypeVariable<?> var = bottom.getTypeParameters()[0];
        while (true) {
            Type type = reifyMap.get(var);
            if (type instanceof TypeVariable) {
                var = (TypeVariable<?>) type;
            } else {
                return type;
            }
        }
    }

    /**
     * Returns the raw type of the generic type.
     * <p>For example in case of {@code CaptureType<String>}, it would return {@code Class<String>}</p>
     * For more comprehensive examples, go through javadocs of {@link com.types.CaptureType}
     *
     * @return Class object
     * @throws java.lang.RuntimeException If the type information cant be obtained. Refer documentation of {@link com.types.CaptureType}
     * @see com.types.CaptureType
     */
    public Class<T> getRawType() {
        Type typeParam = getTypeParam();
        if (typeParam != null)
            return getClass(typeParam);
        else throw new RuntimeException("Could not obtain type information");
    }


    /**
     * Gets the {@link java.lang.Class} object of the argument type.
     * <p>If the type is an {@link java.lang.reflect.ParameterizedType}, then it returns its {@link java.lang.reflect.ParameterizedType#getRawType()}</p>
     *
     * @param type The type
     * @param <A>  type of class object expected
     * @return The Class<A> object of the type
     * @throws java.lang.RuntimeException If the type is a {@link java.lang.reflect.TypeVariable}. In such cases, it is impossible to obtain the Class object
     */
    public static <A> Class<A> getClass(Type type) {
        if (type instanceof GenericArrayType) {
            Type componentType = ((GenericArrayType) type).getGenericComponentType();
            Class<?> componentClass = getClass(componentType);
            if (componentClass != null) {
                return (Class<A>) Array.newInstance(componentClass, 0).getClass();
            } else throw new UnsupportedOperationException("Unknown class: " + type.getClass());
        } else if (type instanceof Class) {
            Class claz = (Class) type;
            return claz;
        } else if (type instanceof ParameterizedType) {
            return getClass(((ParameterizedType) type).getRawType());
        } else if (type instanceof TypeVariable) {
            throw new RuntimeException("The type signature is erased. The type class cant be known by using reflection");
        } else throw new UnsupportedOperationException("Unknown class: " + type.getClass());
    }

    /**
     * This method is the preferred method of usage in case of complex generic types.
     * <p>It returns {@link com.types.TypeADT} object which contains nested information of the type parameters</p>
     *
     * @return TypeADT object
     * @throws java.lang.RuntimeException If the type information cant be obtained. Refer documentation of {@link com.types.CaptureType}
     */
    public TypeADT getParamADT() {
        return recursiveADT(getTypeParam());
    }

    private TypeADT recursiveADT(Type type) {
        if (type instanceof Class) {
            return new TypeADT((Class<?>) type, null);
        } else if (type instanceof ParameterizedType) {
            ArrayList<TypeADT> generic = new ArrayList<>();
            ParameterizedType type1 = (ParameterizedType) type;
            return new TypeADT((Class<?>) type1.getRawType(),
                    Arrays.stream(type1.getActualTypeArguments()).map(x -> recursiveADT(x)).collect(Collectors.toList()));
        } else throw new UnsupportedOperationException();
    }

}

public class TypeADT {
    private final Class<?> reify;
    private final List<TypeADT> parametrized;

    TypeADT(Class<?> reify, List<TypeADT> parametrized) {
        this.reify = reify;
        this.parametrized = parametrized;
    }

    public Class<?> getRawType() {
        return reify;
    }

    public List<TypeADT> getParameters() {
        return parametrized;
    }
}

And now you can do things like:

static void test1() {
        CaptureType<String> t1 = new CaptureType<String>() {
        };
        equals(t1.getRawType(), String.class);
    }

    static void test2() {
        CaptureType<List<String>> t1 = new CaptureType<List<String>>() {
        };
        equals(t1.getRawType(), List.class);
        equals(t1.getParamADT().getParameters().get(0).getRawType(), String.class);
    }


    private static void test3() {
            CaptureType<List<List<String>>> t1 = new CaptureType<List<List<String>>>() {
            };
            equals(t1.getParamADT().getRawType(), List.class);
        equals(t1.getParamADT().getParameters().get(0).getRawType(), List.class);
    }

    static class Test4 extends CaptureType<List<String>> {
    }

    static void test4() {
        Test4 test4 = new Test4();
        equals(test4.getParamADT().getRawType(), List.class);
    }

    static class PreTest5<S> extends CaptureType<Integer> {
    }

    static class Test5 extends PreTest5<Integer> {
    }

    static void test5() {
        Test5 test5 = new Test5();
        equals(test5.getTypeParam(), Integer.class);
    }

    static class PreTest6<S> extends CaptureType<S> {
    }

    static class Test6 extends PreTest6<Integer> {
    }

    static void test6() {
        Test6 test6 = new Test6();
        equals(test6.getTypeParam(), Integer.class);
    }



    class X<T> extends CaptureType<T> {
    }

    class Y<A, B> extends X<B> {
    }

    class Z<Q> extends Y<Q, Map<Integer, List<List<List<Integer>>>>> {
    }

    void test7(){
        Z<String> z = new Z<>();
        TypeADT param = z.getParamADT();
        equals(param.getRawType(), Map.class);
        List<TypeADT> parameters = param.getParameters();
        equals(parameters.get(0).getRawType(), Integer.class);
        equals(parameters.get(1).getRawType(), List.class);
        equals(parameters.get(1).getParameters().get(0).getRawType(), List.class);
        equals(parameters.get(1).getParameters().get(0).getParameters().get(0).getRawType(), List.class);
        equals(parameters.get(1).getParameters().get(0).getParameters().get(0).getParameters().get(0).getRawType(), Integer.class);
    }




    static void test8() throws IllegalAccessException, InstantiationException {
        CaptureType<int[]> type = new CaptureType<int[]>() {
        };
        equals(type.getRawType(), int[].class);
    }

    static void test9(){
        CaptureType<String[]> type = new CaptureType<String[]>() {
        };
        equals(type.getRawType(), String[].class);
    }

    static class SomeClass<T> extends CaptureType<T>{}
    static void test10(){
        SomeClass<String> claz = new SomeClass<>();
        try{
            claz.getRawType();
            throw new RuntimeException("Shouldnt come here");
        }catch (RuntimeException ex){

        }
    }

    static void equals(Object a, Object b) {
        if (!a.equals(b)) {
            throw new RuntimeException("Test failed. " + a + " != " + b);
        }
    }

More info here. But again, it is almost impossible to retrieve the type for:

class SomeClass<T> extends CaptureType<T>{}
SomeClass<String> claz = new SomeClass<>();

where it gets erased.

Converting array to list in Java

The problem is that varargs were introduced in Java 5 and, unfortunately, Arrays.asList() was overloaded with a varargs version too. So Arrays.asList(spam) is understood by the Java 5 compiler as a single varargs argument of type int[], producing a List<int[]> with one element rather than a List<Integer>.

This problem is explained in more details in Effective Java 2nd Ed., Chapter 7, Item 42.
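
For illustration, a minimal sketch of the difference (the variable names are hypothetical), contrasting a plain int[] with a boxed Integer[]:

import java.util.Arrays;
import java.util.List;

public class AsListDemo {
    public static void main(String[] args) {
        int[] spam = {1, 2, 3};

        // The int[] cannot be widened to Integer[], so the whole array
        // becomes the single varargs element: a List<int[]> of size 1.
        List<int[]> wrong = Arrays.asList(spam);
        System.out.println(wrong.size());   // 1

        // A boxed array gives the expected List<Integer>.
        Integer[] boxed = {1, 2, 3};
        List<Integer> right = Arrays.asList(boxed);
        System.out.println(right.size());   // 3
    }
}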

How to change python version in anaconda spyder

If you want to keep python 3, you can follow these directions to create a python 2.7 environment, called py27.

Then you just need to activate py27:

$ conda activate py27

Then you can install spyder on this environment, e.g.:

$ conda install spyder

Then you can start Spyder from the command line, or navigate to the 2.7 version of spyder.exe under the envs directory (e.g. C:\ProgramData\Anaconda3\envs\py27\Scripts).

Convert datetime value into string

This is super old, but I figured I'd add my 2c. DATE_FORMAT does indeed return a string, but I was looking for the CAST function, for the situation where I already had a datetime value in the database and needed to pattern match against it:

http://dev.mysql.com/doc/refman/5.0/en/cast-functions.html

In this case, you'd use:

CAST(date_value AS char)

This answers a slightly different question, but the question title seems ambiguous enough that this might help someone searching.

Run a mySQL query as a cron job?

It depends on what runs cron on your system, but all you have to do to run a PHP script from cron is to call the PHP binary followed by the script location. An example with crontab, running every hour:

# crontab -e
00 * * * * /usr/local/bin/php /home/path/script.php

On my system, I don't even have to put the path to the php installation:

00 * * * * php /home/path/script.php

On another note, you should not be using the mysql extension, because it is deprecated; use mysqli or PDO instead, unless you are stuck on an older installation of PHP. Read here for a comparison.

Regular expression for first and last name

This regex worked for me (I was using it in Angular 8):

([a-zA-Z',.-]+( [a-zA-Z',.-]+)*){2,30}


The name is invalid if:

  1. There is any whitespace at the start or end of the name
  2. It contains symbols, e.g. @
  3. It is shorter than 2 or longer than 30 characters

How to turn off INFO logging in Spark?

For PySpark, you can also set the log level in your scripts with sc.setLogLevel("FATAL"). From the docs:

Control our logLevel. This overrides any user-defined log settings. Valid log levels include: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, WARN
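
For example, a minimal sketch of setting the level on an existing context (the chosen level is arbitrary):

from pyspark import SparkContext

sc = SparkContext.getOrCreate()
sc.setLogLevel("WARN")  # suppresses INFO; use "ERROR" or "FATAL" for even less output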

Sorting dictionary keys in python

[k for k, v in sorted(foo.items(), key=lambda kv: (kv[1], kv[0]))]

How can multiple rows be concatenated into one in Oracle without creating a stored procedure?

From Oracle 11gR2, the LISTAGG clause should do the trick:

SELECT question_id,
       LISTAGG(element_id, ',') WITHIN GROUP (ORDER BY element_id)
FROM YOUR_TABLE
GROUP BY question_id;

Beware if the resulting string is too big (more than 4000 characters for a VARCHAR2, for instance): from version 12cR2, you can use ON OVERFLOW TRUNCATE/ERROR to deal with this issue.

Dark Theme for Visual Studio 2010 With Productivity Power Tools

You can also try this handy online tool, which generates .vssettings file for you.

Visual Studio Color Theme Generator

Python: "TypeError: __str__ returned non-string" but still prints to output?

I had the same problem; in my case, it was because I was returning a number:

def __str__(self):
    return self.code

__str__ must return a str, not any other type.

It now works correctly with:

def __str__(self):
    return self.name

where name is a string.

How to make Java honor the DNS Caching Timeout?

To expand on Byron's answer, I believe you need to edit the file java.security in the %JRE_HOME%\lib\security directory to effect this change.

Here is the relevant section:

#
# The Java-level namelookup cache policy for successful lookups:
#
# any negative value: caching forever
# any positive value: the number of seconds to cache an address for
# zero: do not cache
#
# default value is forever (FOREVER). For security reasons, this
# caching is made forever when a security manager is set. When a security
# manager is not set, the default behavior is to cache for 30 seconds.
#
# NOTE: setting this to anything other than the default value can have
#       serious security implications. Do not set it unless 
#       you are sure you are not exposed to DNS spoofing attack.
#
#networkaddress.cache.ttl=-1 

Documentation on the java.security file here.

Where do I find old versions of Android NDK?

If you search Google for the version you want, you should be able to find a download link. For example, Android NDK r5b is available at http://androgeek.info/?p=296

On another note, it might be a good idea to look at why your code doesn't compile against the latest version and fix it.

Which Radio button in the group is checked?

The GroupBox has a Validated event for this purpose, if you are using WinForms.

private void grpBox_Validated(object sender, EventArgs e)
{
    GroupBox g = sender as GroupBox;
    var a = from RadioButton r in g.Controls
            where r.Checked == true
            select r.Name;
    strChecked = a.First();   // strChecked: a field that stores the name of the checked button
}

How to get image width and height in OpenCV?

Also, for OpenCV in Python you can do:

img = cv2.imread('myImage.jpg')
height, width, channels = img.shape 

Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

Make sure you started the server:

mysql.server start

Then connect with root user:

mysql -uroot

JPA OneToMany and ManyToOne throw: Repeated column in mapping for entity column (should be mapped with insert="false" update="false")

I am not really sure about your question (what "empty table" means, or how mappedBy and JoinColumn were not working).

I think you are trying to set up a bidirectional relationship.

First, you need to decide which side "owns" the relationship. Hibernate is going to set up the relationship based on that side. For example, assume I make the Post side own the relationship (I am simplifying your example, just to keep to the point); the mapping will look like this:

(I hope the syntax is correct; I am writing this from memory. However, the idea should be fine.)

public class User{
    @OneToMany(fetch=FetchType.LAZY, cascade = CascadeType.ALL, mappedBy="user")
    private List<Post> posts;
}


public class Post {
    @ManyToOne(fetch=FetchType.LAZY)
    @JoinColumn(name="user_id")
    private User user;
}

By doing so, the table for Post will have a column user_id which stores the relationship. Hibernate gets the relationship from the user field in Post (instead of posts in User; you will notice the difference if you set Post's user but leave out User's posts).

You mentioned that mappedBy and JoinColumn were not working, but I believe this is in fact the correct way. Please tell us if this approach is not working for you, and give us a bit more info on the problem; I believe the problem is due to something else.


Edit:

Just a bit of extra information on the use of mappedBy, as it is usually confusing at first: in mappedBy, we put the "property name" on the opposite side of the bidirectional relationship, not the table column name.

Comparing two arrays & get the values which are not common

Look at Compare-Object

Compare-Object $a1 $b1 | ForEach-Object { $_.InputObject }

Or if you would like to know where the object belongs to, then look at SideIndicator:

$a1=@(1,2,3,4,5,8)
$b1=@(1,2,3,4,5,6)
Compare-Object $a1 $b1

Changing background color of text box input not working when empty

You could have the CSS first style the textbox, then have js change it:

<input type="text" style="background-color: yellow;" id="subEmail" />

js:

function changeColor() {
  document.getElementById("subEmail").style.backgroundColor = "Insert color here"
}

How to change ProgressBar's progress indicator color in Android

A simpler solution:

progress_drawable_blue

<?xml version="1.0" encoding="utf-8"?>
<layer-list xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@android:id/background">
    <shape>
        <solid
                android:color="@color/disabled" />
    </shape>
</item>

<item
    android:id="@android:id/progress">
    <clip>
        <shape>
            <solid
                android:color="@color/blue" />
        </shape>
    </clip>
</item>

</layer-list>

HTTP 401 - what's an appropriate WWW-Authenticate header value?

When indicating HTTP Basic Authentication we return something like:

WWW-Authenticate: Basic realm="myRealm"

Here Basic is the scheme, and the remainder is very much dependent on that scheme. In this case the realm just gives the browser a literal that can be displayed to the user when prompting for the user id and password.

You're obviously not using Basic, however, since there is no point in having session expiry when Basic Auth is used. I assume you're using some form of forms-based authentication.

From recollection, Windows Challenge Response uses a different scheme and different arguments.

The trick is that it's up to the browser to determine what schemes it supports and how it responds to them.

My gut feeling, if you are using forms-based authentication, is to stay with the 200 + re-login page, but add a custom header that the browser will ignore but your AJAX code can identify.

For a really good user + AJAX experience, get the script to hold on to the AJAX request that found the session expired, fire off a re-login request via a popup, and on success resubmit the original AJAX request and carry on as normal.

Avoid the cheat of having the script hit the site every 5 minutes just to keep the session alive, because that defeats the point of session expiry.

The other alternative is to discard the AJAX request, but that's a poor user experience.
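
As a rough sketch of the custom-header idea (the X-Session-Expired header name and the showReloginPopup() helper are hypothetical):

// Detect a "session expired" marker header on an otherwise-normal 200 response,
// re-login via a popup, then resubmit the original request.
async function fetchWithRelogin(url, options) {
    let response = await fetch(url, options);
    if (response.headers.get('X-Session-Expired') === 'true') {
        await showReloginPopup();              // hypothetical re-login flow
        response = await fetch(url, options);  // resubmit the original AJAX request
    }
    return response;
}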

Is there a way to remove the separator line from a UITableView?

In your viewDidLoad:

self.tableView.tableFooterView = [[UIView alloc] initWithFrame:CGRectZero];
if ([self.tableView respondsToSelector:@selector(setSeparatorInset:)])
{
    [self.tableView setSeparatorInset:UIEdgeInsetsZero];
}

Can we update primary key values of a table?

Short answer: yes, you can. Of course you'll have to make sure that the new value doesn't collide with any existing value and that the other constraints are satisfied, including any foreign keys referencing the old value.
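
For illustration, a minimal sketch (the table and column names are hypothetical); any rows referencing the old key must be updated or cascaded as well:

UPDATE customers
SET    customer_id = 42
WHERE  customer_id = 7;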

What exactly are you trying to do?

How to stop docker under Linux

In my case, it was neither systemd nor a cron job; it was snap. So I had to run:

sudo snap stop docker
sudo snap remove docker

... and the last command actually never finished; I don't know why. This snap thing is really a pain, so I also ran:

sudo apt purge snap

:-)

Copy and Paste a set range in the next empty row

You could also try this

Private Sub CommandButton1_Click()

Sheets("Sheet1").Range("A3:E3").Copy

Dim lastrow As Long
lastrow = Sheets("Summary Info").Range("A65536").End(xlUp).Row

Sheets("Summary Info").Activate
Cells(lastrow + 1, 1).PasteSpecial Paste:=xlPasteValues, Operation:=xlNone, SkipBlanks:=False, Transpose:=False

End Sub

How to set a maximum execution time for a mysql query?

Rewrite your query like this:

select /*+ MAX_EXECUTION_TIME(1000) */ * from table

MySQL will abort the query after the specified time; the value is in milliseconds, and the MAX_EXECUTION_TIME optimizer hint applies to SELECT statements in MySQL 5.7.8 and later.

How to do a scatter plot with empty circles in Python?

So I assume you want to highlight some points that fit a certain criterion. You can use Prelude's command to do a second scatter plot of the highlighted points with an empty circle, together with a first call that plots all the points. Make sure the s parameter is sufficiently small for the larger empty circles to enclose the smaller filled ones.

The other option is to not use scatter and draw the patches individually using the circle/ellipse command. These are in matplotlib.patches; here is some sample code on how to draw circles, rectangles, etc.
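
A minimal sketch of that overlay approach (the highlight criterion, sizes, and colors are arbitrary):

import numpy as np
import matplotlib.pyplot as plt

x, y = np.random.rand(2, 50)
highlight = y > 0.8                      # some criterion selecting the points to highlight

plt.scatter(x, y, s=20)                  # all points as small filled markers
plt.scatter(x[highlight], y[highlight],  # highlighted points as larger empty circles
            s=120, facecolors='none', edgecolors='red')
plt.show()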

<img>: Unsafe value used in a resource URL context

The most elegant way to fix this is to use a pipe. Here is an example (my blog). You can then simply use the url | safe pipe to bypass the security check:

<iframe [src]="url | safe"></iframe>

Refer to the documentation on npm for details: https://www.npmjs.com/package/safe-pipe
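
For context, a minimal sketch of what such a pipe typically looks like (this is an assumption about the package's approach, not its exact source):

import { Pipe, PipeTransform } from '@angular/core';
import { DomSanitizer, SafeResourceUrl } from '@angular/platform-browser';

@Pipe({ name: 'safe' })
export class SafePipe implements PipeTransform {
  constructor(private sanitizer: DomSanitizer) {}

  transform(url: string): SafeResourceUrl {
    // Mark the URL as trusted so Angular allows it in a resource URL context.
    return this.sanitizer.bypassSecurityTrustResourceUrl(url);
  }
}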

Copy a table from one database to another in Postgres

You have to use dblink to copy data from a table in one database into a table in a different database. You have to install and configure the dblink extension to execute cross-database queries.

I have already written a detailed post on this topic; please visit this link.
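
As a rough sketch (the connection string, table, and column names are hypothetical):

-- Run in the target database.
CREATE EXTENSION IF NOT EXISTS dblink;

INSERT INTO target_table (id, name)
SELECT id, name
FROM dblink('dbname=source_db host=localhost user=postgres password=secret',
            'SELECT id, name FROM source_table')
     AS t(id integer, name text);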

Angular 2: How to style host element of the component?

In your component you can add a class to your host element if you have some general styles that you want to apply:

import { HostBinding } from '@angular/core';
export class MyComponent {
  @HostBinding('class') classes = 'classA classB';
}

android fragment- How to save states of views in a fragment when another fragment is pushed on top of it

I worked through an issue very similar to this. Since I knew I would frequently be returning to a previous fragment, I checked whether the fragment's .isAdded() was true, and if so, rather than doing a transaction.replace() I just do a transaction.show(). This keeps the fragment from being recreated if it's already on the stack - no state saving needed.

Fragment target = <my fragment>;
FragmentTransaction transaction = getFragmentManager().beginTransaction();
transaction.setTransition(FragmentTransaction.TRANSIT_FRAGMENT_OPEN);
if(target.isAdded()) {
    transaction.show(target);
} else {
    transaction.addToBackStack(button_id + "stack_item");
    transaction.replace(R.id.page_fragment, target);
}
transaction.commit();

Another thing to keep in mind is that while this preserves the natural order of the fragments themselves, you might still need to handle the activity itself being destroyed and recreated on orientation (configuration) change. To get around this, add the following to your <activity> node in AndroidManifest.xml:

android:configChanges="orientation|screenSize"

In Android 3.2 (API level 13) and higher, the screenSize value is apparently required as well.

Good luck

How to get the current directory of the cmdlet being executed

Path is often null. This function is safer.

function Get-ScriptDirectory
{
    $Invocation = (Get-Variable MyInvocation -Scope 1).Value;
    if($Invocation.PSScriptRoot)
    {
        $Invocation.PSScriptRoot;
    }
    Elseif($Invocation.MyCommand.Path)
    {
        Split-Path $Invocation.MyCommand.Path
    }
    else
    {
        $Invocation.InvocationName.Substring(0,$Invocation.InvocationName.LastIndexOf("\"));
    }
}

Get jQuery version from inspecting the jQuery object

You can get the version of jQuery by simply printing object.jquery, where the object can be any jQuery object you created using $.

For example, if you have selected a <div> element as follows:

var divObj = $("div");

then printing divObj.jquery will show you the version, e.g. 1.7.1.

Basically divObj inherits all the properties of $() or jQuery(), i.e. printing jQuery.fn.jquery will also show the same version, e.g. 1.7.1.

How to Iterate over a Set/HashSet without an Iterator?

You can use an enhanced for loop:

Set<String> set = new HashSet<String>();

//populate set

for (String s : set) {
    System.out.println(s);
}

Or with Java 8:

set.forEach(System.out::println);

How do I make a PHP form that submits to self?

  1. change
    <input type="submit" value="Submit" />
    to
    <input type="submit" value="Submit" name='submit'/>

  2. change
    <form method="post" action="<?php echo $PHP_SELF;?>">
    to
    <form method="post" action="">

  3. The code inside the if block will run only when the form is submitted.
  4. The form (HTML code) will always be shown.
  5. What exactly is your question?

How to avoid the "divide by zero" error in SQL?

Filter out the rows using a WHERE clause so that the denominator is never 0.
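
For illustration, a minimal sketch (the table and column names are hypothetical):

SELECT numerator / denominator AS ratio
FROM   my_table
WHERE  denominator <> 0;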

#1214 - The used table type doesn't support FULLTEXT indexes

Before MySQL 5.6, full-text search is supported only with the MyISAM engine.

Therefore, either change the engine for your table to MyISAM:

CREATE TABLE gamemech_chat (
  id bigint(20) unsigned NOT NULL auto_increment,
  from_userid varchar(50) NOT NULL default '0',
  to_userid varchar(50) NOT NULL default '0',
  text text NOT NULL,
  systemtext text NOT NULL,
  timestamp datetime NOT NULL default '0000-00-00 00:00:00',
  chatroom bigint(20) NOT NULL default '0',
  PRIMARY KEY  (id),
  KEY from_userid (from_userid),
  FULLTEXT KEY from_userid_2 (from_userid),
  KEY chatroom (chatroom),
  KEY timestamp (timestamp)
) ENGINE=MyISAM;

Here is SQLFiddle demo

or upgrade to MySQL 5.6+ and use InnoDB full-text search.

setup.py examples?

Minimal example

from setuptools import setup, find_packages


setup(
    name="foo",
    version="1.0",
    packages=find_packages(),
)

More info in docs

Converting HTML to PDF using PHP?

If you wish to create a PDF from PHP, pdflib will help you (as some others suggested).

Otherwise, if you want to convert an HTML page to PDF via PHP, you'll run into a bit of trouble; I've been trying to do it as well as possible for three years.

So, the options I know are:

DOMPDF: a PHP class that wraps the HTML and builds the PDF. It works well and is customizable (if you know PHP), is based on pdflib, and if I remember right it even handles some CSS. Bad news: it is slow when the HTML is big or complex.

HTML2PS: same as DOMPDF, but this one converts first to a .ps (ghostscript) file and then to whatever format you need (PDF, JPG, PNG). For me it is a little better than DOMPDF, but it has the same speed problem - with better CSS compatibility, though.

Those two are PHP classes, but if you can install some software on the server and access it through passthru() or system(), have a look at these too:

wkhtmltopdf: based on WebKit (Safari's rendering engine), it is really fast and powerful. It seems to be the best one (at the moment) for converting HTML pages to PDF on the fly, taking only 2 seconds for a 3-page XHTML document with CSS 2. It is a recent project anyway, and the google.code page is updated often.

htmldoc: this one is a tank that never really stops or crashes. The project has looked dead since 2007, but if you don't need CSS compatibility it can work nicely for you.
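
As a rough sketch of calling wkhtmltopdf from PHP via the shell (the paths are examples, and wkhtmltopdf must be installed on the server):

<?php
$source = escapeshellarg('https://example.com/invoice.html');
$target = escapeshellarg('/tmp/invoice.pdf');
system("wkhtmltopdf $source $target", $exitCode);

if ($exitCode === 0) {
    echo 'PDF written to /tmp/invoice.pdf';
}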

Conditional Replace Pandas

The .ix indexer works okay for pandas versions prior to 0.20.0, but since pandas 0.20.0 the .ix indexer is deprecated, so you should avoid using it. Instead, use the .loc or .iloc indexers. You can solve this problem with:

mask = df.my_channel > 20000
column_name = 'my_channel'
df.loc[mask, column_name] = 0

Or, in one line,

df.loc[df.my_channel > 20000, 'my_channel'] = 0

mask helps you select the rows in which df.my_channel > 20000 is True, while df.loc[mask, column_name] = 0 sets the value 0 in the column named column_name for the selected rows where mask holds.

Update: In this case, you should use loc because if you use iloc, you will get a NotImplementedError telling you that iLocation based boolean indexing on an integer type is not available.

OperationalError: database is locked

A lot of answers are already available here, but I want to share my case as well; it may help someone.

I had opened a connection in the Python API to update values, and I close that connection only after receiving the server response. The problem was that I opened another connection to do some other operation on the server before closing the first connection in the Python API.
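
A minimal sketch of keeping connections short-lived so a lingering write cannot keep the file locked (the file path, table, and column names are hypothetical):

import sqlite3

def update_value(db_path, row_id, value):
    # The with-block commits (or rolls back) the transaction on exit.
    with sqlite3.connect(db_path, timeout=10) as conn:
        conn.execute("UPDATE items SET value = ? WHERE id = ?", (value, row_id))
    conn.close()  # release the lock before any other connection is opened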

qmake: could not find a Qt installation of ''

  • Install qt using:

    sudo apt install qt5-qmake
    
  • Open ~/.bashrc file:

    vim ~/.bashrc
    
  • Added the path below to the ~/.bashrc file:

    export PATH="/opt/Qt/5.15.1/gcc_64/bin/:$PATH"
    
  • Execute/load a ~/.bashrc file in your current shell

    source ~/.bashrc
    
  • Try now qmake by using the version command below:

    qmake --version
    

Task<> does not contain a definition for 'GetAwaiter'

You have to install Microsoft.Bcl.Async NuGet package to be able to use async/await constructs in pre-.NET 4.5 versions (such as Silverlight 4.0+)

Just for clarity - this package used to be called Microsoft.CompilerServices.AsyncTargetingPack and some old tutorials still refer to it.

Take a look here for info from Immo Landwerth.
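
For reference, installing it from the NuGet Package Manager Console looks like this:

PM> Install-Package Microsoft.Bcl.Async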

Passing a varchar full of comma delimited values to a SQL Server IN function

This works perfectly! The answers below are more complicated than they need to be. Don't look at this as dynamic SQL. Set up your stored procedure as follows:

(@id as varchar(50))
as

Declare @query as nvarchar(max)
set @query ='
select * from table
where id in('+@id+')'
EXECUTE sp_executesql @query

youtube: link to display HD video by default

Nick Vogt at H3XED posted this syntax: https://www.youtube.com/v/VIDEOID?version=3&vq=hd1080

Take this link and replace "VIDEOID" with the (shortened/shared) ID of the video.

Example for the ID i3jNECZ3ybk: .../v/i3jNECZ3ybk?version=3&vq=hd1080

What you get as a result is the standalone 1080p video, but not embedded in the usual YouTube page.

Mongoose's find method with $or condition does not work properly

async () => {
  let body = await model.find().or([
    { name: 'something' },
    { nickname: 'somethang' }
  ]).exec();
  console.log(body);
};
/* Gives an array of the matching documents;
   returns [] if nothing is found. */

Implementing multiple interfaces with Java - is there a way to delegate?

There's no pretty way. You might be able to use a proxy, with the handler holding the target objects and delegating each method call to the appropriate one. Of course you'll have to use a factory, because there'll be no constructor.
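
A rough sketch of that proxy idea (the interfaces, class names, and combine() helper are hypothetical):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface Greeter { String greet(String name); }
interface Counter { int next(); }

public class DelegatingProxyDemo {
    // Builds a proxy implementing all given interfaces; each call is routed
    // to the first target that is an instance of the method's declaring type.
    @SuppressWarnings("unchecked")
    static <T> T combine(Class<?>[] interfaces, Object... targets) {
        InvocationHandler handler = (proxy, method, args) -> {
            for (Object target : targets) {
                if (method.getDeclaringClass().isInstance(target)) {
                    return method.invoke(target, args);
                }
            }
            throw new UnsupportedOperationException(method.toString());
        };
        return (T) Proxy.newProxyInstance(
                DelegatingProxyDemo.class.getClassLoader(), interfaces, handler);
    }

    public static void main(String[] args) {
        Greeter g = name -> "Hello, " + name;
        Counter c = new Counter() {
            private int i;
            public int next() { return ++i; }
        };
        Object combined = combine(new Class<?>[] { Greeter.class, Counter.class }, g, c);
        System.out.println(((Greeter) combined).greet("world")); // Hello, world
        System.out.println(((Counter) combined).next());         // 1
    }
}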

Android Google Maps v2 - set zoom level for myLocation

You can also use:

mMap.animateCamera( CameraUpdateFactory.zoomTo( 17.0f ) );    

This just changes the zoom to any desired value between the minimum (2.0) and the maximum (21.0).

The API warns that not all locations have tiles at values at or near maximum zoom.

See this for more information about zoom methods available in the CameraUpdateFactory.

How to change date format using jQuery?

var d = new Date();

var curr_date = d.getDate();

var curr_month = d.getMonth() + 1; // getMonth() is zero-based

var curr_year = d.getFullYear();

curr_year = curr_year.toString().substr(2,2);

document.write(curr_date+"-"+curr_month+"-"+curr_year);

You can adjust this to your needs.

How to check whether the user uploaded a file in PHP?

I checked your code and think you should try this:

if (!file_exists($_FILES['fileupload']['tmp_name']) || !is_uploaded_file($_FILES['fileupload']['tmp_name'])) {
    echo 'No upload';
} else {
    echo 'upload';
}

How to manage Angular2 "expression has changed after it was checked" exception when a component property depends on current datetime

As mentioned by @leocaseiro on github issue.

I found 3 solutions for those who are looking for easy fixes.

1) Moving from ngAfterViewInit to ngAfterContentInit

2) Moving to ngAfterViewChecked combined with ChangeDetectorRef as suggested on #14748 (comment)

3) Keep with ngOnInit() but call ChangeDetectorRef.detectChanges() after your changes.

Django - iterate number in for loop of a template

Also one can use this:

{% if forloop.first %}

or

{% if forloop.last %}

Is it not possible to define multiple constructors in Python?

Unlike Java, you cannot define multiple constructors in Python. However, you can define a default value to be used if one is not passed.

def __init__(self, city="Berlin"):
  self.city = city

Select a random sample of results from a query result

This is not a perfect answer, but it will get much better performance.

SELECT  *
FROM    (
    SELECT  *
    FROM    mytable sample (0.01)
    ORDER BY
            dbms_random.value
    )
WHERE rownum <= 1000

SAMPLE gives you a percentage of your actual table; if you really want 1000 rows you will need to adjust that number. More often I just need an arbitrary number of rows anyway, so I don't limit my results. On my database with 2 million rows I get 2 seconds vs 60 seconds.

select * from mytable sample (0.01)

NumPy first and last element from array

>>> test = [1, 23, 4, 6, 7, 8]
>>> from itertools import zip_longest
>>> for e in zip_longest(test, reversed(test)):
...     print(e)
...
(1, 8)
(23, 7)
(4, 6)
(6, 4)
(7, 23)
(8, 1)

Another option:

>>> test = [1, 23, 4, 6, 7, 8]
>>> start, end = iter(test), reversed(test)
>>> try:
...     while True:
...         print([next(start), next(end)])
... except StopIteration:
...     pass
...
[1, 8]
[23, 7]
[4, 6]
[6, 4]
[7, 23]
[8, 1]

Importing text file into excel sheet

You can write .WorkbookConnection.Delete after .Refresh BackgroundQuery:=False; this will delete the text file's external connection.
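
As a rough sketch of where that line fits (the file path and parse options are examples):

With ActiveSheet.QueryTables.Add(Connection:="TEXT;C:\data\import.txt", _
                                 Destination:=Range("A1"))
    .TextFileParseType = xlDelimited
    .TextFileCommaDelimiter = True
    .Refresh BackgroundQuery:=False
    .WorkbookConnection.Delete   ' drop the external text-file connection
End With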

Print page numbers on pages when printing html

This is what you want:

@page {
   @bottom-right {
    content: counter(page) " of " counter(pages);
   }
}

Android ImageButton with a selected state?

The best way to do this without more images :

public static void buttonEffect(View button){
    button.setOnTouchListener(new OnTouchListener() {

        public boolean onTouch(View v, MotionEvent event) {
            switch (event.getAction()) {
                case MotionEvent.ACTION_DOWN: {
                    v.getBackground().setColorFilter(0xe0f47521,PorterDuff.Mode.SRC_ATOP);
                    v.invalidate();
                    break;
                }
                case MotionEvent.ACTION_UP: {
                    v.getBackground().clearColorFilter();
                    v.invalidate();
                    break;
                }
            }
            return false;
        }
    });
}

CSS table column autowidth

You could specify the width of all but the last table cell, and add table-layout: fixed and a width to the table.

You could set

table tr ul.actions {margin: 0; white-space:nowrap;}

(or set this for the last TD as Sander suggested instead).

This forces the inline LIs not to wrap. Unfortunately this does not lead to a new width calculation for the containing UL (and its parent TD), and therefore does not autosize the last TD.

This means: if an inline element has no given width, a TD's width is always computed automatically first (if not specified). Only then is its inline content rendered with this calculated width, and the white-space property is applied, stretching the content beyond the calculated boundaries.

So I guess it's not possible without having an element with a specific width inside the last TD.

Unable to copy a file from obj\Debug to bin\Debug

I can confirm this bug exists in VS 2012 Update 2 also.

My work-around is to:

  1. Clean Solution (and do nothing else)
  2. Close all open documents/files in the solution
  3. Exit VS 2012
  4. Run VS 2012
  5. Build Solution

I don't know if this is relevant or not, but my project uses class files "linked" in from other projects - it's a Silverlight 5 project, and the only way to share a class that is both .NET and SL compatible is to link the files.

Something to consider: look for linked files across projects in a single solution.

Comparing Class Types in Java

As said earlier, your code will work unless you have the same class loaded by two different class loaders. This might happen if you need multiple versions of the same class in memory at the same time, or if you are doing some weird on-the-fly compilation stuff (as I am).

In this case, if you want to consider these the same class (which might be reasonable depending on the case), you can compare them by matching their names.

public static boolean areClassesQuiteTheSame(Class<?> c1, Class<?> c2) {
  // TODO handle nulls maybe?
  return c1.getCanonicalName().equals(c2.getCanonicalName());
}

Keep in mind that this comparison does just what it says: it compares class names. I don't think you will be able to cast from one version of a class to the other, and before looking into reflection you might want to make sure there's a good reason for your class loader mess.

proper way to sudo over ssh

echo $VAR_REMOTEROOTPASS | ssh -tt -i $PATH_TO_KEY/id_mykey $VAR_REMOTEUSER@$varRemoteHost \
    "echo \"$varCommand\" | sudo bash"

PSEXEC, access denied errors

The following worked, but only after I upgraded PsExec to 2.1 from Microsoft.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System]
"LocalAccountTokenFilterPolicy"=dword:00000001

See: http://forum.sysinternals.com/topic10924.html

I had a slightly older version that didn't work. I used it to do some USMT work via Dell KACE; worked a treat :)

How to Populate a DataTable from a Stored Procedure

You can use a SqlDataAdapter:

    SqlDataAdapter adapter = new SqlDataAdapter();
    SqlCommand cmd = new SqlCommand("usp_GetABCD", sqlcon);
    cmd.CommandType = CommandType.StoredProcedure;
    adapter.SelectCommand = cmd;
    DataTable dt = new DataTable();
    adapter.Fill(dt);

Adding a new line/break tag in XML

New line in XML

With XML you can use the character entities:

  1. Carriage return: &#xD;
  2. Line feed: &#xA;

or try, as @dj_segfault proposed (see his answer), using CDATA:

 <![CDATA[Tootsie roll tiramisu macaroon wafer carrot cake.                       
            Danish topping sugar plum tart bonbon caramels cake.]]>