Programs & Examples On #Levels

Pandas Merging 101

This post will go through the following topics:

  • Merging with index under different conditions
    • options for index-based joins: merge, join, concat
    • merging on indexes
    • merging on index of one, column of other
  • effectively using named indexes to simplify merging syntax




Index-based joins

TL;DR

There are a few options, some simpler than others depending on the use case.

  1. DataFrame.merge with left_index and right_index (or left_on and right_on using named indexes)
    • supports inner/left/right/full
    • can only join two at a time
    • supports column-column, index-column, index-index joins
  2. DataFrame.join (join on index)
    • supports inner/left (default)/right/full
    • can join multiple DataFrames at a time
    • supports index-index joins
  3. pd.concat (joins on index)
    • supports inner/full (default)
    • can join multiple DataFrames at a time
    • supports index-index joins

Index to index joins

Setup & Basics

import pandas as pd
import numpy as np

np.random.seed([3, 14])
left = pd.DataFrame(data={'value': np.random.randn(4)}, 
                    index=['A', 'B', 'C', 'D'])    
right = pd.DataFrame(data={'value': np.random.randn(4)},  
                     index=['B', 'D', 'E', 'F'])
left.index.name = right.index.name = 'idxkey'

left
           value
idxkey          
A      -0.602923
B      -0.402655
C       0.302329
D      -0.524349

right
 
           value
idxkey          
B       0.543843
D       0.013135
E      -0.326498
F       1.385076

Typically, an inner join on index would look like this:

left.merge(right, left_index=True, right_index=True)

         value_x   value_y
idxkey                    
B      -0.402655  0.543843
D      -0.524349  0.013135

Other joins follow similar syntax.

Notable Alternatives

  1. DataFrame.join defaults to joining on the index. It does a LEFT OUTER JOIN by default, so how='inner' is necessary here.

     left.join(right, how='inner', lsuffix='_x', rsuffix='_y')
    
              value_x   value_y
     idxkey                    
     B      -0.402655  0.543843
     D      -0.524349  0.013135
    

    Note that I needed to specify the lsuffix and rsuffix arguments since join would otherwise error out:

     left.join(right)
     ValueError: columns overlap but no suffix specified: Index(['value'], dtype='object')
    

    This is because the column names are the same. It would not be a problem if they were named differently.

     left.rename(columns={'value':'leftvalue'}).join(right, how='inner')
    
             leftvalue     value
     idxkey                     
     B       -0.402655  0.543843
     D       -0.524349  0.013135
    
  2. pd.concat joins on the index and can join two or more DataFrames at once. It does a full outer join by default, so join='inner' is required here.

     pd.concat([left, right], axis=1, sort=False, join='inner')
    
                value     value
     idxkey                    
     B      -0.402655  0.543843
     D      -0.524349  0.013135
    

    For more information on concat, see this post.


Index to Column joins

To perform an inner join using the index of left and a column of right, you will use DataFrame.merge with a combination of left_index=True and right_on=....

right2 = right.reset_index().rename({'idxkey' : 'colkey'}, axis=1)
right2
 
  colkey     value
0      B  0.543843
1      D  0.013135
2      E -0.326498
3      F  1.385076

left.merge(right2, left_index=True, right_on='colkey')

    value_x colkey   value_y
0 -0.402655      B  0.543843
1 -0.524349      D  0.013135

Other joins follow a similar structure. Note that only merge can perform index to column joins. You can join on multiple columns, provided the number of index levels on the left equals the number of columns on the right.

join and concat are not capable of mixed merges. You will need to set the index as a pre-step using DataFrame.set_index.
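
For illustration, here is a minimal sketch of that pre-step, reusing the left and right2 frames from above (the variable name right3 is introduced only for this example):

# move the join key of right2 into its index, then join on index
right3 = right2.set_index('colkey')
left.join(right3, how='inner', lsuffix='_x', rsuffix='_y')
# equivalent to left.merge(right2, left_index=True, right_on='colkey'),
# except the key ends up in the index rather than in a column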


Effectively using Named Index [pandas >= 0.23]

If your index is named, then from pandas >= 0.23, DataFrame.merge allows you to specify the index name to on (or left_on and right_on as necessary).

left.merge(right, on='idxkey')

         value_x   value_y
idxkey                    
B      -0.402655  0.543843
D      -0.524349  0.013135

For the previous example of merging with the index of left, column of right, you can use left_on with the index name of left:

left.merge(right2, left_on='idxkey', right_on='colkey')

    value_x colkey   value_y
0 -0.402655      B  0.543843
1 -0.524349      D  0.013135



Under which circumstances textAlign property works in Flutter?

The textAlign property only works when there is extra space available for the Text's content. Below are two examples showing when textAlign has an impact and when it does not.


No impact

For instance, in this example, it won't have any impact because there is no extra space for the content of the Text.

Text(
  "Hello",
  textAlign: TextAlign.end, // no impact
),



Has impact

If you wrap it in a Container and provide extra width, the Text has more space to align within.

Container(
  width: 200,
  color: Colors.orange,
  child: Text(
    "Hello",
    textAlign: TextAlign.end, // has impact
  ),
)


Center Plot title in ggplot2

The ggeasy package has a function called easy_center_title() to do just that. I find it much more appealing than theme(plot.title = element_text(hjust = 0.5)) and it's so much easier to remember.

ggplot(data = dat, aes(time, total_bill, fill = time)) + 
  geom_bar(colour = "black", fill = "#DD8888", width = .8, stat = "identity") + 
  guides(fill = FALSE) +
  xlab("Time of day") +
  ylab("Total bill") +
  ggtitle("Average bill for 2 people") +
  ggeasy::easy_center_title()


Note that as of writing this answer you will need to install the development version of ggeasy from GitHub to use easy_center_title(). You can do so by running remotes::install_github("jonocarroll/ggeasy").

Failed to find target with hash string 'android-25'

You can open the SDK manager standalone by going to the installation directory: right-click on SDK Manager.exe and choose Run as administrator. I hope it will help.

Split / Explode a column of dictionaries into separate columns with pandas

my_df = pd.DataFrame.from_dict(my_dict, orient='index', columns=['my_col'])

... would have parsed the dict properly (putting each dict key into a separate df column, and the key values into df rows), so the dicts would not get squashed into a single column in the first place.
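
A minimal, hypothetical sketch of that idea (the data below is invented; when the dict values are plain scalars instead of dicts, the columns=['my_col'] argument above names the resulting single column):

import pandas as pd

# invented id -> record mapping
my_dict = {
    'row1': {'a': 1, 'b': 2},
    'row2': {'a': 3, 'b': 4},
}

# orient='index' makes each outer key a row label and unpacks each inner
# dict into separate columns, so nothing gets squashed into one column
my_df = pd.DataFrame.from_dict(my_dict, orient='index')
print(my_df)
#       a  b
# row1  1  2
# row2  3  4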

Global Events in Angular

My favorite way to do this is by using a BehaviorSubject or an EventEmitter (almost the same) in my service to control all my subcomponents.

Using the Angular CLI, run ng g s to create a new service, then use a BehaviorSubject or EventEmitter:

import { BehaviorSubject } from 'rxjs';

export class MyService {
  // all the stuff that must exist

  myString: string[] = [];
  contactChange: BehaviorSubject<string[]> = new BehaviorSubject(this.myString);

  getContacts(newContacts) {
    // get your data from a web service & when you're done, simply next the value
    this.contactChange.next(newContacts);
  }
}

When you do that every component using your service as a provider will be aware of the change. Simply subscribe to the result like you do with eventEmitter ;)

export class MyComp {
  // all the stuff that exists, like @Component + a constructor using (private myService: MyService)

  ngOnInit() {
    this.myService.contactChange.subscribe((contacts) => {
      this.contactList += contacts; // runs every time next is called
    });
  }
}

How are iloc and loc different?

iloc works based on integer positioning. So no matter what your row labels are, you can always, e.g., get the first row by doing

df.iloc[0]

or the last five rows by doing

df.iloc[-5:]

You can also use it on the columns. This retrieves the 3rd column:

df.iloc[:, 2]    # the : in the first position indicates all rows

You can combine them to get intersections of rows and columns:

df.iloc[:3, :3] # The upper-left 3 X 3 entries (assuming df has 3+ rows and columns)

On the other hand, .loc uses named indices. Let's set up a data frame with strings as row and column labels:

df = pd.DataFrame(index=['a', 'b', 'c'], columns=['time', 'date', 'name'])

Then we can get the first row by

df.loc['a']     # equivalent to df.iloc[0]

and the second two rows of the 'date' column by

df.loc['b':, 'date']   # equivalent to df.iloc[1:, 1]

and so on. Now, it's probably worth pointing out that the default row and column indices for a DataFrame are integers from 0 and in this case iloc and loc would work in the same way. This is why your three examples are equivalent. If you had a non-numeric index such as strings or datetimes, df.loc[:5] would raise an error.

Also, you can do column retrieval just by using the data frame's __getitem__:

df['time']    # equivalent to df.loc[:, 'time']

Now suppose you want to mix position and named indexing, that is, indexing using names on rows and positions on columns (to clarify, I mean select from our data frame, rather than creating a data frame with strings in the row index and integers in the column index). This is where .ix comes in:

df.ix[:2, 'time']    # the first two rows of the 'time' column
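
.ix has since been deprecated and removed from pandas, so as a rough sketch, the same mixed row-position/column-label lookup on the df defined above could be written with the current API as:

# convert the row positions to labels and stay with loc...
df.loc[df.index[:2], 'time']

# ...or convert the column label to a position and use iloc throughout
df.iloc[:2, df.columns.get_loc('time')]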

I think it's also worth mentioning that you can pass boolean vectors to the loc method as well. For example:

 b = [True, False, True]
 df.loc[b] 

This will return the 1st and 3rd rows of df. It is equivalent to df[b] for selection, but it can also be used for assignment via boolean vectors:

df.loc[b, 'name'] = 'Mary', 'John'
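
Putting the boolean selection and assignment together on the small frame from above (the cell values are invented just to make the example runnable):

import pandas as pd

df = pd.DataFrame({'time': [1, 2, 3],
                   'date': ['d1', 'd2', 'd3'],
                   'name': ['x', 'y', 'z']},
                  index=['a', 'b', 'c'])

b = [True, False, True]
print(df.loc[b])                     # rows 'a' and 'c'

df.loc[b, 'name'] = 'Mary', 'John'   # assigns to the two selected rows
print(df['name'].tolist())           # ['Mary', 'y', 'John']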

Node JS Promise.all and forEach

Just to add to the solution presented, in my case I wanted to fetch multiple data from Firebase for a list of products. Here is how I did it:

useEffect(() => {
  const fn = p => firebase.firestore().doc(`products/${p.id}`).get();
  const actions = data.occasion.products.map(fn);
  const results = Promise.all(actions);
  results.then(data => {
    const newProducts = [];
    data.forEach(p => {
      newProducts.push({ id: p.id, ...p.data() });
    });
    setProducts(newProducts);
  });
}, [data]);

Activity, AppCompatActivity, FragmentActivity, and ActionBarActivity: When to Use Which?

There is a lot of confusion here, especially if you read outdated sources.

The basic one is Activity, which can show Fragments. You can use this combination if you're on Android version > 4.

However, there is also a support library which encompasses the other classes you mentioned: FragmentActivity, ActionBarActivity and AppCompatActivity. Originally they were used to support fragments on Android versions < 4, but they're also used to backport functionality from newer versions of Android (material design, for example).

The latest one is AppCompatActivity; the other two are older. The strategy I use is to always use AppCompatActivity, so that the app will be ready for backports from future versions of Android.

Generate random colors (RGB)

Here:

import random

def random_color():
    rgbl = [255, 0, 0]
    random.shuffle(rgbl)
    return tuple(rgbl)

The result is either red, green or blue. The method is not applicable to other sets of colors, though; for those you'd have to build a list of all the colors you want to choose from and then use random.choice to pick one at random.
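
As a sketch of that idea (the palette below is made up; any list of RGB tuples would do):

import random

# hypothetical palette to choose from
palette = [
    (255, 0, 0),    # red
    (0, 128, 0),    # green
    (0, 0, 255),    # blue
    (255, 165, 0),  # orange
]

def random_palette_color():
    """Pick one predefined color at random."""
    return random.choice(palette)

def random_rgb():
    """Or generate a completely random RGB triple."""
    return tuple(random.randint(0, 255) for _ in range(3))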

laravel the requested url was not found on this server

For Ubuntu 18.04 inside a Vagrant box, this is what helped me:

  1. Ensure www-data has permissions to the .htaccess file

      
        sudo chown www-data.www-data .htaccess
      
    
  2. edit the apache2 conf to allow for symlinks etc

      
        sudo nano /etc/apache2/apache2.conf
      
    

    Add this to the file

<Directory /var/www/html/>
  Options Indexes FollowSymLinks
  AllowOverride all
  Require all granted
</Directory>

  3. Restart your apache2 server

    sudo service apache2 restart

I hope this helps someone.

Changing factor levels with dplyr mutate

I'm not quite sure I understand your question properly, but if you want to change the factor levels of cyl with mutate() you could do:

df <- mtcars %>% mutate(cyl = factor(cyl, levels = c(4, 6, 8)))

You would get:

#> str(df$cyl)
# Factor w/ 3 levels "4","6","8": 2 2 1 2 3 2 3 1 1 2 ...

python: get directory two levels up

For getting the directory 2 levels up:

 import os
 import os.path as path

 two_up = path.abspath(path.join(os.getcwd(), "../.."))
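
A minimal alternative sketch using pathlib (in the standard library since Python 3.4), which avoids the string-based "../.." entirely:

from pathlib import Path

# parents[1] is the directory two levels above the current working directory
two_up = Path.cwd().resolve().parents[1]
print(two_up)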

Export a list into a CSV or TXT file in R

Check out the link below; it worked well for me, with no limits on the output size and no omitted elements, even beyond 1000 entries:

Exporting large lists in R as .txt or .csv

How to convert data.frame column from Factor to numeric

As an alternative to $dollarsign notation, use a within block:

breast <- within(breast, {
  class <- as.numeric(as.character(class))
})

Note that you want to convert your vector to a character before converting it to a numeric. Simply calling as.numeric(class) would return the integer ids corresponding to each factor level (1, 2) rather than the levels themselves.

plot different color for different categorical levels using matplotlib

I had the same question, and have spent all day trying out different packages.

I had originally used matplotlib, and was not happy with either mapping categories to predefined colors, or grouping/aggregating and then iterating through the groups (and still having to map colors). I just felt it was a poor package implementation.

Seaborn wouldn't work for my case, and Altair ONLY works inside a Jupyter Notebook.

The best solution for me was PlotNine, which "is an implementation of a grammar of graphics in Python, and based on ggplot2".

Below is the plotnine code to replicate your R example in Python:

from plotnine import *
from plotnine.data import diamonds

g = ggplot(diamonds, aes(x='carat', y='price', color='color')) + geom_point(stat='summary')
print(g)

[plotnine diamonds example plot]

So clean and simple :)

Azure SQL Database "DTU percentage" metric

Still not cool enough to comment, but regarding @vladislav's comment, the original article was fairly old. Here is an updated document regarding DTUs, which should help answer the OP's question.

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-what-is-a-dtu

How to auto adjust the div size for all mobile / tablet display formats?

I don't have much time and your jsfiddle did not work right now, but maybe this will help you get started.

First of all, you should avoid putting CSS in your HTML tags, like align="center". Put stuff like that in your CSS, since it is much clearer and won't be deprecated as fast.

If you want to design responsive layouts you should use media queries, which were introduced in CSS3 and are supported very well by now.

Example css:

@media screen and (min-width: 100px) and (max-width: 199px)
{
    .button
    {
        width: 25px;
    }
}

@media screen and (min-width: 200px) and (max-width: 299px)
{
    .button
    {
        width: 50px;
    }
}

You can use any css you want inside a media query.
http://www.w3.org/TR/css3-mediaqueries/

upstream sent too big header while reading response header from upstream

Plesk instructions

I combined the top two answers here

In Plesk 12, I had nginx running as a reverse proxy (which I think is the default). So the current top answer doesn't work as nginx is also being run as a proxy.

I went to Subscriptions | [subscription domain] | Websites & Domains (tab) | [Virtual Host domain] | Web Server Settings.

Then at the bottom of that page you can set the Additional nginx directives which I set to be a combination of the top two answers here:

fastcgi_buffers         16  16k;
fastcgi_buffer_size         32k;
proxy_buffer_size          128k;
proxy_buffers            4 256k;
proxy_busy_buffers_size    256k;

rake assets:precompile RAILS_ENV=production not working as required

I found out that my backed-up project worked well if I precompiled without running bundle update. Maybe something went wrong with a gem update, but I don't know which gem has the error.

What is the difference between join and merge in Pandas?

pandas.merge() is the underlying function used for all merge/join behavior.

DataFrames provide the pandas.DataFrame.merge() and pandas.DataFrame.join() methods as a convenient way to access the capabilities of pandas.merge(). For example, df1.merge(right=df2, ...) is equivalent to pandas.merge(left=df1, right=df2, ...).

These are the main differences between df.join() and df.merge():

  1. lookup on right table: df1.join(df2) always joins via the index of df2, but df1.merge(df2) can join to one or more columns of df2 (default) or to the index of df2 (with right_index=True).
  2. lookup on left table: by default, df1.join(df2) uses the index of df1 and df1.merge(df2) uses column(s) of df1. That can be overridden by specifying df1.join(df2, on=key_or_keys) or df1.merge(df2, left_index=True).
  3. left vs inner join: df1.join(df2) does a left join by default (keeps all rows of df1), but df.merge does an inner join by default (returns only matching rows of df1 and df2).

So, the generic approach is to use pandas.merge(df1, df2) or df1.merge(df2). But for a number of common situations (keeping all rows of df1 and joining to an index in df2), you can save some typing by using df1.join(df2) instead.

Some notes on these issues from the documentation at http://pandas.pydata.org/pandas-docs/stable/merging.html#database-style-dataframe-joining-merging:

merge is a function in the pandas namespace, and it is also available as a DataFrame instance method, with the calling DataFrame being implicitly considered the left object in the join.

The related DataFrame.join method uses merge internally for the index-on-index and index-on-column(s) joins, but joins on indexes by default rather than trying to join on common columns (the default behavior for merge). If you are joining on index, you may wish to use DataFrame.join to save yourself some typing.

...

These two function calls are completely equivalent:

left.join(right, on=key_or_keys)
pd.merge(left, right, left_on=key_or_keys, right_index=True, how='left', sort=False)
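
As a small sanity check, here is a hypothetical pair of frames (contents invented) run through both calls:

import pandas as pd

left = pd.DataFrame({'key': ['a', 'b', 'c'], 'lval': [1, 2, 3]})
right = pd.DataFrame({'rval': [10, 20]}, index=['a', 'b'])

via_join = left.join(right, on='key')
via_merge = pd.merge(left, right, left_on='key', right_index=True,
                     how='left', sort=False)

print(via_join)
print(via_merge)  # same rows and values, matching the equivalence quoted above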

Confirm deletion using Bootstrap 3 modal box

<!-- Button trigger modal -->
<button type="button" class="btn btn-primary" data-toggle="modal" data-target="#exampleModal">
  Launch demo modal
</button>

<!-- Modal -->
<div class="modal fade" id="exampleModal" tabindex="-1" role="dialog" aria-labelledby="exampleModalLabel" aria-hidden="true">
  <div class="modal-dialog" role="document">
    <div class="modal-content">
      <div class="modal-header">
        <h5 class="modal-title" id="exampleModalLabel">Modal title</h5>
        <button type="button" class="close" data-dismiss="modal" aria-label="Close">
          <span aria-hidden="true">&times;</span>
        </button>
      </div>
      <div class="modal-body">
        ...
      </div>
      <div class="modal-footer">
        <button type="button" class="btn btn-secondary" data-dismiss="modal">Close</button>
        <button type="button" class="btn btn-primary">Save changes</button>
      </div>
    </div>
  </div>
</div>

https://getbootstrap.com/docs/4.0/components/modal/

How to extract Month from date in R

For some time now, you can also rely solely on the data.table package and its IDate class plus associated functions (check ?as.IDate()). So there is no need to additionally install lubridate.

require(data.table)

some_date <- c("01/02/1979", "03/04/1980")
month(as.IDate(some_date, '%d/%m/%Y')) # all data.table functions

How to reload the current state?

Probably the cleanest approach would be the following:

<a data-ui-sref="directory.organisations.details" data-ui-sref-opts="{reload: true}">Details State</a>

This way we can reload the state from the HTML alone.

Make Frequency Histogram for Factor Variables

It seems like you want barplot(prop.table(table(animals))):


However, this is not a histogram.

Scope 'session' is not active for the current thread; IllegalStateException: No thread-bound request found

I fixed this issue by adding the following code to my file.

@Component
@Scope(value = "session",  proxyMode = ScopedProxyMode.TARGET_CLASS)

XML configuration -

<listener>
        <listener-class>
            org.springframework.web.context.request.RequestContextListener 
        </listener-class>
</listener>

The above can also be done using Java configuration:

@Configuration
@WebListener
public class MyRequestContextListener extends RequestContextListener {
}

How to add a RequestContextListener with no-xml configuration?

I am using Spring version 5.1.4.RELEASE, and there is no need to add the dependency below to the pom:

<dependency>
    <groupId>cglib</groupId>
    <artifactId>cglib</artifactId>
    <version>3.2.10</version>
</dependency>

Retrieving values from nested JSON Object

You will have to iterate step by step into the nested JSON.

For example, here is a JSON response received from the Google geocoding API:

{
   "results" : [
      {
         "address_components" : [
            {
               "long_name" : "Bhopal",
               "short_name" : "Bhopal",
               "types" : [ "locality", "political" ]
            },
            {
               "long_name" : "Bhopal",
               "short_name" : "Bhopal",
               "types" : [ "administrative_area_level_2", "political" ]
            },
            {
               "long_name" : "Madhya Pradesh",
               "short_name" : "MP",
               "types" : [ "administrative_area_level_1", "political" ]
            },
            {
               "long_name" : "India",
               "short_name" : "IN",
               "types" : [ "country", "political" ]
            }
         ],
         "formatted_address" : "Bhopal, Madhya Pradesh, India",
         "geometry" : {
            "bounds" : {
               "northeast" : {
                  "lat" : 23.3326697,
                  "lng" : 77.5748062
               },
               "southwest" : {
                  "lat" : 23.0661497,
                  "lng" : 77.2369767
               }
            },
            "location" : {
               "lat" : 23.2599333,
               "lng" : 77.412615
            },
            "location_type" : "APPROXIMATE",
            "viewport" : {
               "northeast" : {
                  "lat" : 23.3326697,
                  "lng" : 77.5748062
               },
               "southwest" : {
                  "lat" : 23.0661497,
                  "lng" : 77.2369767
               }
            }
         },
         "place_id" : "ChIJvY_Wj49CfDkR-NRy1RZXFQI",
         "types" : [ "locality", "political" ]
      }
   ],
   "status" : "OK"
}

I iterate in the fashion shown below to reach "location" : { "lat" : 23.2599333, "lng" : 77.412615 }:

// receive the JSON into a JSONObject

        JSONObject json = new JSONObject(output.toString());
        JSONArray result = json.getJSONArray("results");
        JSONObject result1 = result.getJSONObject(0);
        JSONObject geometry = result1.getJSONObject("geometry");
        JSONObject locat = geometry.getJSONObject("location");

        //"iterate onto level of location";

        double lat = locat.getDouble("lat");
        double lng = locat.getDouble("lng");

best way to get folder and file list in Javascript

fs/promises and fs.Dirent

Here's an efficient, non-blocking ls program using Node's fast fs.Dirent objects and the fs/promises module. This approach allows you to skip wasteful fs.exists or fs.stat calls on every path -

// main.js
import { readdir } from "fs/promises"
import { join } from "path"

async function* ls (path = ".")
{ yield path
  for (const dirent of await readdir(path, { withFileTypes: true }))
    if (dirent.isDirectory())
      yield* ls(join(path, dirent.name))
    else
      yield join(path, dirent.name)
}

async function* empty () {}

async function toArray (iter = empty())
{ let r = []
  for await (const x of iter)
    r.push(x)
  return r
}

toArray(ls(".")).then(console.log, console.error)

Let's get some sample files so we can see ls working -

$ yarn add immutable     # (just some example package)
$ node main.js
[
  '.',
  'main.js',
  'node_modules',
  'node_modules/.yarn-integrity',
  'node_modules/immutable',
  'node_modules/immutable/LICENSE',
  'node_modules/immutable/README.md',
  'node_modules/immutable/contrib',
  'node_modules/immutable/contrib/cursor',
  'node_modules/immutable/contrib/cursor/README.md',
  'node_modules/immutable/contrib/cursor/__tests__',
  'node_modules/immutable/contrib/cursor/__tests__/Cursor.ts.skip',
  'node_modules/immutable/contrib/cursor/index.d.ts',
  'node_modules/immutable/contrib/cursor/index.js',
  'node_modules/immutable/dist',
  'node_modules/immutable/dist/immutable-nonambient.d.ts',
  'node_modules/immutable/dist/immutable.d.ts',
  'node_modules/immutable/dist/immutable.es.js',
  'node_modules/immutable/dist/immutable.js',
  'node_modules/immutable/dist/immutable.js.flow',
  'node_modules/immutable/dist/immutable.min.js',
  'node_modules/immutable/package.json',
  'package.json',
  'yarn.lock'
]

For added explanation and other ways to leverage async generators, see this Q&A.

Convert all data frame character columns to factors

As @Raf Z commented on this question, dplyr now has mutate_if. Super useful, simple and readable.

> str(df)
'data.frame':   5 obs. of  5 variables:
 $ A: Factor w/ 5 levels "A","B","C","D",..: 1 2 3 4 5
 $ B: int  1 2 3 4 5
 $ C: logi  TRUE TRUE FALSE FALSE TRUE
 $ D: chr  "a" "b" "c" "d" ...
 $ E: chr  "A a" "B b" "C c" "D d" ...

> df <- df %>% mutate_if(is.character,as.factor)

> str(df)
'data.frame':   5 obs. of  5 variables:
 $ A: Factor w/ 5 levels "A","B","C","D",..: 1 2 3 4 5
 $ B: int  1 2 3 4 5
 $ C: logi  TRUE TRUE FALSE FALSE TRUE
 $ D: Factor w/ 5 levels "a","b","c","d",..: 1 2 3 4 5
 $ E: Factor w/ 5 levels "A a","B b","C c",..: 1 2 3 4 5

Html/PHP - Form - Input as array

In addition, for those who have an empty POST variable, don't use this:

name="[levels][level][]"

Rather, use this (as it already appears in this example):

name="levels[level][]"

Turn Pandas Multi-Index into column

I ran into Karl's issue as well. I just found myself renaming the aggregated column then resetting the index.

df = pd.DataFrame(df.groupby(['arms', 'success'])['success'].sum()).rename(columns={'success':'sum'})


df = df.reset_index()


Subset a dataframe by multiple factor levels

Try this:

> data[match(as.character(data$Code), selected, nomatch = FALSE), ]
    Code Value
1      A     1
2      B     2
1.1    A     1
1.2    A     1

Center/Set Zoom of Map to cover all visible Markers?

The size of the array must be greater than zero. Otherwise you will have unexpected results.

function zoomeExtends(){
  var bounds = new google.maps.LatLngBounds();
  if (markers.length>0) { 
      for (var i = 0; i < markers.length; i++) {
         bounds.extend(markers[i].getPosition());
        }    
        myMap.fitBounds(bounds);
    }
}

Pandas Merge - How to avoid duplicating columns

I'm fairly new to Pandas, but I wanted to achieve the same thing: automatically avoiding column names ending in _x or _y and removing duplicate data. I finally did it by combining this answer and this one from Stack Overflow.

sales.csv

    city;state;units
    Mendocino;CA;1
    Denver;CO;4
    Austin;TX;2

revenue.csv

    branch_id;city;revenue;state_id
    10;Austin;100;TX
    20;Austin;83;TX
    30;Austin;4;TX
    47;Austin;200;TX
    20;Denver;83;CO
    30;Springfield;4;I

merge.py

import pandas

def drop_y(df):
    # list comprehension of the cols that end with '_y'
    to_drop = [x for x in df if x.endswith('_y')]
    df.drop(to_drop, axis=1, inplace=True)


sales = pandas.read_csv('data/sales.csv', delimiter=';')
revenue = pandas.read_csv('data/revenue.csv', delimiter=';')

result = pandas.merge(sales, revenue,  how='inner', left_on=['state'], right_on=['state_id'], suffixes=('', '_y'))
drop_y(result)
result.to_csv('results/output.csv', index=True, index_label='id', sep=';')

When executing the merge command, I replace the _x suffix with an empty string, and then I can remove the columns ending with _y.

output.csv

    id;city;state;units;branch_id;revenue;state_id
    0;Denver;CO;4;20;83;CO
    1;Austin;TX;2;10;100;TX
    2;Austin;TX;2;20;83;TX
    3;Austin;TX;2;30;4;TX
    4;Austin;TX;2;47;200;TX

How to limit the number of selected checkboxes?

I think we should use click instead of change.

Working DEMO

What is and how to fix System.TypeInitializationException error?

Whenever a TypeInitializationException is thrown, check all initialization logic of the type you are referring to for the first time in the statement where the exception is thrown - in your case: Logger.

Initialization logic includes: the type's static constructor (which - if I didn't miss it - you do not have for Logger) and field initialization.

Field initialization is pretty much "uncritical" in Logger except for the following lines:

private static string s_bstCommonAppData = Path.Combine(s_commonAppData, "XXXX");
private static string s_bstUserDataDir = Path.Combine(s_bstCommonAppData, "UserData");
private static string s_commonAppData = Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData);

s_commonAppData is null at the point where Path.Combine(s_commonAppData, "XXXX"); is called. As far as I'm concerned, these initializations happen in the exact order you wrote them - so put s_commonAppData up by at least two lines ;)

Re-ordering factor levels in data frame

Assuming your dataframe is mydf:

mydf$task <- factor(mydf$task, levels = c("up", "down", "left", "right", "front", "back"))

Error in contrasts when defining a linear model in R

The answers by the other authors have already addressed the problem of factors with only one level or NAs.

Today, I stumbled upon the same error when using the rstatix::anova_test() function, but my factors were okay (more than one level, no NAs, no character vectors, ...). Instead, I could fix the error by dropping all variables in the dataframe that are not included in the model. I don't know the reason for this behavior, but just knowing about it might be helpful when encountering this error.

Grouped bar plot in ggplot

First you need to get the counts for each category, i.e. how many Bads and Goods and so on are there for each group (Food, Music, People). This would be done like so:

library(ggplot2)
library(reshape2)  # for melt()

raw <- read.csv("http://pastebin.com/raw.php?i=L8cEKcxS",sep=",")
raw[,2]<-factor(raw[,2],levels=c("Very Bad","Bad","Good","Very Good"),ordered=FALSE)
raw[,3]<-factor(raw[,3],levels=c("Very Bad","Bad","Good","Very Good"),ordered=FALSE)
raw[,4]<-factor(raw[,4],levels=c("Very Bad","Bad","Good","Very Good"),ordered=FALSE)

raw=raw[,c(2,3,4)] # getting rid of the "people" variable as I see no use for it

freq=table(col(raw), as.matrix(raw)) # get the counts of each factor level

Then you need to create a data frame out of it, melt it and plot it:

Names=c("Food","Music","People")     # create list of names
data=data.frame(cbind(freq),Names)   # combine them into a data frame
data=data[,c(5,3,1,2,4)]             # sort columns

# melt the data frame for plotting
data.m <- melt(data, id.vars='Names')

# plot everything
ggplot(data.m, aes(Names, value)) +   
  geom_bar(aes(fill = variable), position = "dodge", stat="identity")

Is this what you're after?


To clarify a little bit, in ggplot multiple grouping bar you had a data frame that looked like this:

> head(df)
  ID Type Annee X1PCE X2PCE X3PCE X4PCE X5PCE X6PCE
1  1    A  1980   450   338   154    36    13     9
2  2    A  2000   288   407   212    54    16    23
3  3    A  2020   196   434   246    68    19    36
4  4    B  1980   111   326   441    90    21    11
5  5    B  2000    63   298   443   133    42    21
6  6    B  2020    36   257   462   162    55    30

Since you have numerical values in columns 4-9, which would later be plotted on the y axis, this can be easily transformed with reshape and plotted.

For our current data set, we needed something similar, so we used freq=table(col(raw), as.matrix(raw)) to get this:

> data
   Names Very.Bad Bad Good Very.Good
1   Food        7   6    5         2
2  Music        5   5    7         3
3 People        6   3    7         4

Just imagine you have Very.Bad, Bad, Good and so on instead of X1PCE, X2PCE, X3PCE. See the similarity? But we needed to create such structure first. Hence the freq=table(col(raw), as.matrix(raw)).

IE8 issue with Twitter Bootstrap 3

You got your CSS from a CDN (bootstrapcdn.com), but respond.js only works for local files. So try your website in IE8 with a local copy of bootstrap.css. Or read: CDN/X-Domain Setup

Note See also: https://github.com/scottjehl/Respond/pull/206

Update:

Please read: http://getbootstrap.com/getting-started/#support

In addition, Internet Explorer 8 requires the use of respond.js to enable media query support.

See also: https://github.com/scottjehl/Respond

For this reason the basic template contains these lines in the head section:

<!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
  <script src="../../assets/js/html5shiv.js"></script>
  <script src="../../assets/js/respond.min.js"></script>
<![endif]-->

How do I convert a factor into date format?

You were close. format= needs to be added to the as.Date call:

mydate <- factor("1/15/2006 0:00:00")
as.Date(mydate, format = "%m/%d/%Y")
## [1] "2006-01-15"

How to download an entire directory and subdirectories using wget?

You can also use this command:

wget --mirror -pc --convert-links -P ./your-local-dir/ http://www.your-website.com

so that you get the exact mirror of the website you want to download

Eliminating NAs from a ggplot

Just an update to the answer of @rafa.pereira. Since ggplot2 is part of tidyverse, it makes sense to use the convenient tidyverse functions to get rid of NAs.

library(tidyverse)
airquality %>% 
        drop_na(Ozone) %>%
        ggplot(aes(x = Ozone))+
        geom_bar(stat="bin")

Note that you can also use drop_na() without columns specification; then all the rows with NAs in any column will be removed.

Transaction isolation levels relation with locks on table

The locks are always taken at the DB level.

From the official Oracle documentation: to avoid conflicts during a transaction, a DBMS uses locks, mechanisms for blocking access by others to the data that is being accessed by the transaction. (Note that in auto-commit mode, where each statement is a transaction, locks are held for only one statement.) After a lock is set, it remains in force until the transaction is committed or rolled back. For example, a DBMS could lock a row of a table until updates to it have been committed. The effect of this lock would be to prevent a user from getting a dirty read, that is, reading a value before it is made permanent. (Accessing an updated value that has not been committed is considered a dirty read because it is possible for that value to be rolled back to its previous value. If you read a value that is later rolled back, you will have read an invalid value.)

How locks are set is determined by what is called a transaction isolation level, which can range from not supporting transactions at all to supporting transactions that enforce very strict access rules.

One example of a transaction isolation level is TRANSACTION_READ_COMMITTED, which will not allow a value to be accessed until after it has been committed. In other words, if the transaction isolation level is set to TRANSACTION_READ_COMMITTED, the DBMS does not allow dirty reads to occur. The interface Connection includes five values that represent the transaction isolation levels you can use in JDBC.

jQuery "on create" event for dynamically-created elements

As mentioned in several other answers, mutation events have been deprecated, so you should use MutationObserver instead. Since nobody has given any details on that yet, here it goes...

Basic JavaScript API

The API for MutationObserver is fairly simple. It's not quite as simple as the mutation events, but it's still okay.

function callback(records) {
  records.forEach(function (record) {
    var list = record.addedNodes;
    var i = list.length - 1;

    for ( ; i > -1; i-- ) {
      if (list[i].nodeName === 'SELECT') {
        // Insert code here...
        console.log(list[i]);
      }
    }
  });
}

var observer = new MutationObserver(callback);

var targetNode = document.body;

observer.observe(targetNode, { childList: true, subtree: true });

<script>
  // For testing
  setTimeout(function() {
    var $el = document.createElement('select');
    document.body.appendChild($el);
  }, 500);
</script>

Let's break that down.

var observer = new MutationObserver(callback);

This creates the observer. The observer isn't watching anything yet; this is just where the event listener gets attached.

observer.observe(targetNode, { childList: true, subtree: true });

This makes the observer start up. The first argument is the node that the observer will watch for changes on. The second argument is the options for what to watch for.

  • childList means I want to watch for child elements being added or removed.
  • subtree is a modifier that extends childList to watch for changes anywhere in this element's subtree (otherwise, it would just look at changes directly within targetNode).

The other two main options besides childList are attributes and characterData, which mean about what they sound like. You must use one of those three.

function callback(records) {
  records.forEach(function (record) {

Things get a little tricky inside the callback. The callback receives an array of MutationRecords. Each MutationRecord can describe several changes of one type (childList, attributes, or characterData). Since I only told the observer to watch for childList, I won't bother checking the type.

var list = record.addedNodes;

Right here I grab a NodeList of all the child nodes that were added. This will be empty for all the records where nodes aren't added (and there may be many such records).

From there on, I loop through the added nodes and find any that are <select> elements.

Nothing really complex here.

jQuery

...but you asked for jQuery. Fine.

(function($) {

  var observers = [];

  $.event.special.domNodeInserted = {

    setup: function setup(data, namespaces) {
      var observer = new MutationObserver(checkObservers);

      observers.push([this, observer, []]);
    },

    teardown: function teardown(namespaces) {
      var obs = getObserverData(this);

      obs[1].disconnect();

      observers = $.grep(observers, function(item) {
        return item !== obs;
      });
    },

    remove: function remove(handleObj) {
      var obs = getObserverData(this);

      obs[2] = obs[2].filter(function(event) {
        return event[0] !== handleObj.selector && event[1] !== handleObj.handler;
      });
    },

    add: function add(handleObj) {
      var obs = getObserverData(this);

      var opts = $.extend({}, {
        childList: true,
        subtree: true
      }, handleObj.data);

      obs[1].observe(this, opts);

      obs[2].push([handleObj.selector, handleObj.handler]);
    }
  };

  function getObserverData(element) {
    var $el = $(element);

    return $.grep(observers, function(item) {
      return $el.is(item[0]);
    })[0];
  }

  function checkObservers(records, observer) {
    var obs = $.grep(observers, function(item) {
      return item[1] === observer;
    })[0];

    var triggers = obs[2];

    var changes = [];

    records.forEach(function(record) {
      if (record.type === 'attributes') {
        if (changes.indexOf(record.target) === -1) {
          changes.push(record.target);
        }

        return;
      }

      $(record.addedNodes).toArray().forEach(function(el) {
        if (changes.indexOf(el) === -1) {
          changes.push(el);
        }
      })
    });

    triggers.forEach(function checkTrigger(item) {
      changes.forEach(function(el) {
        var $el = $(el);

        if ($el.is(item[0])) {
          $el.trigger('domNodeInserted');
        }
      });
    });
  }

})(jQuery);

This creates a new event called domNodeInserted, using the jQuery special events API. You can use it like so:

$(document).on("domNodeInserted", "select", function () {
  $(this).combobox();
});

I would personally suggest looking for a class because some libraries will create select elements for testing purposes.

Naturally, you can also use .off("domNodeInserted", ...) or fine-tune the watching by passing in data like this:

$(document.body).on("domNodeInserted", "select.test", {
  attributes: true,
  subtree: false
}, function () {
  $(this).combobox();
});

This would trigger checking for the appearance of a select.test element whenever attributes changed for elements directly inside the body.

You can see it live on jsFiddle.



Note

This jQuery code is a fairly basic implementation. It does not trigger in cases where modifications elsewhere make your selector valid.

For example, suppose your selector is .test select and the document already has a <select>. Adding the class test to <body> will make the selector valid, but because I only check record.target and record.addedNodes, the event would not fire. The change has to happen to the element you wish to select itself.

This could be avoided by querying for the selector whenever mutations happen. I chose not to do that to avoid causing duplicate events for elements that had already been handled. Properly dealing with adjacent or general sibling combinators would make things even trickier.

For a more comprehensive solution, see https://github.com/pie6k/jquery.initialize, as mentioned in Damien Ó Ceallaigh's answer. However, the author of that library has announced that the library is old and suggests that you shouldn't use jQuery for this.

How to change line width in ggplot?

Line width in ggplot2 can be changed with the lwd= argument in geom_line():

geom_line(aes(x=..., y=..., color=...), lwd=1.5)

Git add all subdirectories

I can't say for sure if this is the case, but what appeared to be a problem for me was having .gitignore files in some of the subdirectories. Again, I can't guarantee this, but everything worked after these were deleted.

Access elements in json object like an array

I found a straightforward way of solving this, with the use of JSON.parse.

Let's assume the json below is inside the variable jsontext.

[
  ["Blankaholm", "Gamleby"],
  ["2012-10-23", "2012-10-22"],
  ["Blankaholm. Under natten har det varit inbrott", "E22 i med Gamleby. Singelolycka. En bilist har.],
  ["57.586174","16.521841"], ["57.893162","16.406090"]
]

The solution is this:

var parsedData = JSON.parse(jsontext);

Now I can access the elements the following way:

var cities = parsedData[0];

Fixing the order of facets in ggplot

There are a couple of good solutions here.

Similar to the answer from Harpal, but within the facet call, so it doesn't require any change to the underlying data or pre-plotting manipulation:

# Change this code:
facet_grid(.~size) + 
# To this code:
facet_grid(~factor(size, levels=c('50%','100%','150%','200%')))

This is flexible, and can be implemented for any variable as you change what element is faceted, no underlying change in the data required.

Multiple left joins on multiple tables in one query

This kind of query should work - after rewriting with explicit JOIN syntax:

SELECT something
FROM   master      parent
JOIN   master      child ON child.parent_id = parent.id
LEFT   JOIN second parentdata ON parentdata.id = parent.secondary_id
LEFT   JOIN second childdata ON childdata.id = child.secondary_id
WHERE  parent.parent_id = 'rootID'

The tripping wire here is that an explicit JOIN binds more tightly than the "old style" CROSS JOIN with a comma (,). I quote the manual here:

In any case JOIN binds more tightly than the commas separating FROM-list items.

After rewriting the first query, all joins are applied left-to-right (logically - Postgres is free to rearrange tables in the query plan otherwise) and it works.

Just to make my point, this would work, too:

SELECT something
FROM   master parent
LEFT   JOIN second parentdata ON parentdata.id = parent.secondary_id
,      master child
LEFT   JOIN second childdata ON childdata.id = child.secondary_id
WHERE  child.parent_id = parent.id
AND    parent.parent_id = 'rootID'

But explicit JOIN syntax is generally preferable, as your case illustrates once again.

And be aware that multiple (LEFT) JOINs can multiply rows.

Postgres could not connect to server

Psql option

-h hostname --host=hostname

: Specifies the host name of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the Unix-domain socket.

$ grep "port\|unix_socket" /etc/postgresql/9.1/main/postgresql.conf
port = 5433                                         # (change requires restart)
unix_socket_directory = '/var/run/postgresql'       # (change requires resta

$ netstat -nalp | grep postgres
unix  2      [ ACC ]     STREAM     LISTENING     106753   4349/postgres       /tmp/.s.PGSQL.5432
unix  2      [ ACC ]     STREAM     LISTENING     10377 1031/postgres       /var/run/postgresql/.s.PGSQL.5433

Run psql with the -h option

$ psql -p 5433 -h /var/run/postgresql

No need to make a soft link

ASP.NET MVC 4 Custom Authorize Attribute with Permission Codes (without roles)

If you use the WEB API with Claims, you can use this:

[AttributeUsage(AttributeTargets.Method | AttributeTargets.Class, Inherited = true, AllowMultiple = true)]
public class AutorizeCompanyAttribute:  AuthorizationFilterAttribute
{
    public string Company { get; set; }

    public override void OnAuthorization(HttpActionContext actionContext)
    {
        var claims = ((ClaimsIdentity)Thread.CurrentPrincipal.Identity);
        var claim = claims.Claims.Where(x => x.Type == "Company").FirstOrDefault();

        string privilegeLevels = string.Join("", claim.Value);        

        if (privilegeLevels.Contains(this.Company)==false)
        {
            actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized, "Usuario de Empresa No Autorizado");
        }
    }
}
[HttpGet]
[AutorizeCompany(Company = "MyCompany")]
[Authorize(Roles ="SuperAdmin")]
public IEnumerable MyAction()
{....
}

Limit Get-ChildItem recursion depth

This is a function that outputs one line per item, with indentation according to depth level. It is probably much more readable.

function GetDirs($path = $pwd, [Byte]$ToDepth = 255, [Byte]$CurrentDepth = 0)
{
    $CurrentDepth++
    If ($CurrentDepth -le $ToDepth) {
        foreach ($item in Get-ChildItem $path)
        {
            if (Test-Path $item.FullName -PathType Container)
            {
                "." * $CurrentDepth + $item.FullName
                GetDirs $item.FullName -ToDepth $ToDepth -CurrentDepth $CurrentDepth
            }
        }
    }
}

It is based on a blog post, Practical PowerShell: Pruning File Trees and Extending Cmdlets.

Calling a class function inside of __init__

Call the function in this way:

self.parse_file()

You also need to define your parse_file() function like this:

def parse_file(self):

The parse_file method has to be bound to an object upon calling it (because it's not a static method). This is done by calling the function on an instance of the object, in your case the instance is self.

How to sort by dates excel?

It's actually really easy. Highlight the DATE column and make sure that it's set as a date in Excel. Highlight everything you want to sort, then go to Data > Sort > Column and set sorting by date. Hope it helps.

Replace contents of factor column in R dataframe

I had the same problem. This worked better:

Identify which level you want to modify: levels(iris$Species)

    "setosa" "versicolor" "virginica" 

So, setosa is the first.

Then, write this:

     levels(iris$Species)[1] <-"new name"

Eclipse: Error ".. overlaps the location of another project.." when trying to create new project

Eclipse is erroring because if you try and create a project on a directory that exists, Eclipse doesn't know if it's an actual project or not - so it errors, saving you from losing work!

So you have two solutions:

  1. Move the folder counter_src somewhere else, then create the project (which will create the directory), then import the source files back into the newly created counter_src.

  2. Right-click on the project explorer and import an existing project, select C:\Users\Martin\Java\Counter\ as your root directory. If Eclipse sees a project, you will be able to import it.

Google Maps API v3 marker with label

The above solutions won't work on iPad 2.

Recently I had a Safari browser crash while plotting markers, even with a small number of markers. Initially I was using the marker-with-label library (markerwithlabel.js) to plot the markers; with the native Google marker it worked fine even with a large number of markers, but I wanted customized markers. I tried the solution given by Jonathan above, but the crashing issue was still not resolved. After a lot of research I came across the blog post http://nickjohnson.com/b/google-maps-v3-how-to-quickly-add-many-markers, and now my map search works smoothly on iPad 2 :)

How to output a multiline string in Bash?

Inspired by the insightful answers on this page, I created a mixed approach, which I consider the simplest and most flexible one. What do you think?

First, I define the usage in a variable, which allows me to reuse it in different contexts. The format is very simple, almost WYSIWYG, without the need to add any control characters. This seems reasonably portable to me (I ran it on macOS and Ubuntu).

__usage="
Usage: $(basename $0) [OPTIONS]

Options:
  -l, --level <n>              Something something something level
  -n, --nnnnn <levels>         Something something something n
  -h, --help                   Something something something help
  -v, --version                Something something something version
"

Then I can simply use it as

echo "$__usage"

or even better, when parsing parameters, I can just echo it there in a one-liner:

levelN=${2:?"--level: n is required!""${__usage}"}

Entity Framework - Include Multiple Levels of Properties

I also had to use multiple Includes, and at the 3rd level I needed multiple properties:

(from e in context.JobCategorySet
                      where e.Id == id &&
                            e.AgencyId == agencyId
                      select e)
                      .Include(x => x.JobCategorySkillDetails)
                      .Include(x => x.Shifts.Select(r => r.Rate).Select(rt => rt.DurationType))
                      .Include(x => x.Shifts.Select(r => r.Rate).Select(rt => rt.RuleType))
                      .Include(x => x.Shifts.Select(r => r.Rate).Select(rt => rt.RateType))
                      .FirstOrDefaultAsync();

This may help someone :)

Best way to compare two complex objects

Thanks to the example of Jonathan. I expanded it for all cases (arrays, lists, dictionaries, primitive types).

This is a comparison without serialization and does not require the implementation of any interfaces for compared objects.

        /// <summary>Returns description of difference or empty value if equal</summary>
        public static string Compare(object obj1, object obj2, string path = "")
        {
            string path1 = string.IsNullOrEmpty(path) ? "" : path + ": ";
            if (obj1 == null && obj2 != null)
                return path1 + "null != not null";
            else if (obj2 == null && obj1 != null)
                return path1 + "not null != null";
            else if (obj1 == null && obj2 == null)
                return null;

            if (!obj1.GetType().Equals(obj2.GetType()))
                return "different types: " + obj1.GetType() + " and " + obj2.GetType();

            Type type = obj1.GetType();
            if (path == "")
                path = type.Name;

            if (type.IsPrimitive || typeof(string).Equals(type))
            {
                if (!obj1.Equals(obj2))
                    return path1 + "'" + obj1 + "' != '" + obj2 + "'";
                return null;
            }
            if (type.IsArray)
            {
                Array first = obj1 as Array;
                Array second = obj2 as Array;
                if (first.Length != second.Length)
                    return path1 + "array size differs (" + first.Length + " vs " + second.Length + ")";

                var en = first.GetEnumerator();
                int i = 0;
                while (en.MoveNext())
                {
                    string res = Compare(en.Current, second.GetValue(i), path);
                    if (res != null)
                        return res + " (Index " + i + ")";
                    i++;
                }
            }
            else if (typeof(System.Collections.IEnumerable).IsAssignableFrom(type))
            {
                System.Collections.IEnumerable first = obj1 as System.Collections.IEnumerable;
                System.Collections.IEnumerable second = obj2 as System.Collections.IEnumerable;

                var en = first.GetEnumerator();
                var en2 = second.GetEnumerator();
                int i = 0;
                while (en.MoveNext())
                {
                    if (!en2.MoveNext())
                        return path + ": enumerable size differs";

                    string res = Compare(en.Current, en2.Current, path);
                    if (res != null)
                        return res + " (Index " + i + ")";
                    i++;
                }
            }
            else
            {
                foreach (PropertyInfo pi in type.GetProperties(BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Public))
                {
                    try
                    {
                        var val = pi.GetValue(obj1);
                        var tval = pi.GetValue(obj2);
                        if (path.EndsWith("." + pi.Name))
                            return null;
                        var pathNew = (path.Length == 0 ? "" : path + ".") + pi.Name;
                        string res = Compare(val, tval, pathNew);
                        if (res != null)
                            return res;
                    }
                    catch (TargetParameterCountException)
                    {
                        //index property
                    }
                }
                foreach (FieldInfo fi in type.GetFields(BindingFlags.NonPublic | BindingFlags.Instance | BindingFlags.Public))
                {
                    var val = fi.GetValue(obj1);
                    var tval = fi.GetValue(obj2);
                    if (path.EndsWith("." + fi.Name))
                        return null;
                    var pathNew = (path.Length == 0 ? "" : path + ".") + fi.Name;
                    string res = Compare(val, tval, pathNew);
                    if (res != null)
                        return res;
                }
            }
            return null;
        }

For easy copying of the code, I created a repository.

adding x and y axis labels in ggplot2

Since the data ex1221new was not given, I created dummy data and added it to a data frame. Also, the code in the question needs a few changes, because ggplot2 has since deprecated some functions:

scale_area() has been replaced by scale_size_area()
opts() has been replaced by theme()

In my answer, I stored the plot in the mygraph variable and then used

mygraph$labels$x="Discharge of materials" #changes x axis title
mygraph$labels$y="Area Affected" # changes y axis title

And the work is done. Below is the complete answer.

install.packages("Sleuth2")
library(Sleuth2)
library(ggplot2)

ex1221new <- data.frame(Discharge = 100:109, Area = 120:129, NO3 = seq(2, 5, length.out = 10))
discharge <- ex1221new$Discharge
area <- ex1221new$Area
nitrogen <- ex1221new$NO3
p <- ggplot(ex1221new, aes(discharge, area))
mygraph <- p + geom_point(aes(size = nitrogen)) +
  scale_size_area() +
  ggtitle("Weighted Scatterplot of Watershed Area vs. Discharge and Nitrogen Levels (PPM)") +
  theme(
    plot.title = element_text(color = "Blue", size = 30, hjust = 0.5),
    # change the styling of both axes simultaneously from here
    axis.title = element_text(color = "Green", size = 20, family = "Courier")
  )

# you can change the axis titles with the code below
mygraph$labels$x = "Discharge of materials" # changes x axis title
mygraph$labels$y = "Area Affected" # changes y axis title
mygraph



   

Also, you can change the legend title using the same approach as above:

mygraph$labels$size= "N2" #size contains the nitrogen level 

Best way to get child nodes

Don't let whitespace fool you. Just test this in a browser console.

Use native JavaScript. Here is an example with two 'ul' sets with the same class. You don't need to put your 'ul' list all on one line to avoid whitespace; just account for the whitespace text nodes when indexing into childNodes.

How to get around whitespace with querySelector() and childNodes[], JSFiddle link: https://jsfiddle.net/aparadise/56njekdo/

var y = document.querySelector('.list');
// childNodes includes the whitespace text nodes, so index 5 is the third <li>
var myNode = y.childNodes[5].style.backgroundColor='red';

<ul class="list">
    <li>8</li>
    <li>9</li>
    <li>100</li>
</ul>

<ul class="list">
    <li>ABC</li>
    <li>DEF</li>
    <li>XYZ</li>
</ul>

Rails: FATAL - Peer authentication failed for user (PG::Error)

If you get that error message (Peer authentication failed for user (PG::Error)) when running unit tests, make sure the test database exists.

How can building a heap be O(n) time complexity?

Your analysis is correct. However, it is not tight.

It is not really easy to explain intuitively why building a heap is a linear operation; you are better off reading a formal analysis.

A great analysis of the algorithm can be seen here.


The main idea is that in the build_heap algorithm the actual heapify cost is not O(log n) for all elements.

When heapify is called, the running time depends on how far an element might move down in the tree before the process terminates. In other words, it depends on the height of the element in the heap. In the worst case, the element might go all the way down to the leaf level.

Let us count the work done level by level.

At the bottommost level, there are 2^(h) nodes, but we do not call heapify on any of these, so the work is 0. At the next level up there are 2^(h - 1) nodes, and each might move down by 1 level. At the 3rd level from the bottom, there are 2^(h - 2) nodes, and each might move down by 2 levels.

As you can see, not all heapify operations are O(log n); this is why you get O(n).
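
To make the bound explicit (this summation is not spelled out above, it just formalizes the level-by-level counting): with 2^(h-j) nodes sitting j levels above the leaves, each doing at most j units of sift-down work,

\sum_{j=0}^{h} 2^{h-j} \cdot j \;=\; 2^{h} \sum_{j=0}^{h} \frac{j}{2^{j}} \;\le\; 2^{h} \cdot 2 \;=\; O(n),

since \sum_{j \ge 0} j/2^{j} = 2 and a heap of height h has n \ge 2^{h} nodes.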

How to get the Parent's parent directory in Powershell?

To extrapolate a bit on the other answers (in as Beginner-friendly a way as possible):

  • String objects that point to valid paths can be converted to DirectoryInfo/FileInfo objects via functions like Get-Item and Get-ChildItem.
  • .Parent can only be used on a DirectoryInfo object.
  • .Directory converts a FileInfo object to a DirectoryInfo object, and will return null if used on any other type (even another DirectoryInfo object).
  • .DirectoryName converts a FileInfo object to a String object, and will return null if used on any other type (even another String object).
  • .FullName converts a DirectoryInfo/FileInfo object to a String object, and will return null if used on any other type (even another DirectoryInfo/FileInfo object).

Check the object type with the GetType Method to see what you're working with: $scriptPath.GetType()

Lastly, a quick tip that helps with making one-liners: Get-Item has the gi alias and Get-ChildItem has the gci alias.

ssh "permissions are too open" error

PuTTY can do the job on Windows 10. It generates a public key using a private key as input.

  1. Download PuTTY from https://www.putty.org/
  2. Install PuTTY. Two applications come with the installation: PuTTY (configuration) and PuTTYgen (key generator).
  3. Launch PuTTYgen.
  4. Click Load and select a private key file. Please note, you need to rename your private key file with a .ppk extension, e.g. private-key.ppk.

execJs: 'Could not find a JavaScript runtime' but execjs AND therubyracer are in Gemfile

Ubuntu Users:

I had the same problem and I fixed it by installing nodejs on my system, independent of the gem.

On Ubuntu it's: sudo apt-get install nodejs

I'm using 64-bit Ubuntu 11.10.

Update: From @Galina's answer below I'm guessing that the latest version of nodejs is required, so @steve98177 your best option on a Red Hat (or CentOS) box is to install from source code as @Galina did. But since you can't "make/install" on this box, I suggest you try to install a Fedora RPM (long shot) https://github.com/joyent/node/wiki/Installing-Node.js-via-package-manager or find another RH/CentOS box (that you can 'make' on), create your own RPM, and install it on the original RH box (if the old glibc on RH plays nicely).

The real issue here (IMHO) is installing gems that have dependencies on packages installed outside of the Ruby environment. Is there a way of knowing this before installing? An RFI for Gems or Bundler?


CentOS/RedHat Users:

sudo yum install nodejs

Setting up Vim for Python

Under Linux, what worked for me was John Anderson's (sontek) guide, which you can find at this link. However, I cheated and just used his easy configuration setup from his Git repository:

git clone -b vim https://github.com/sontek/dotfiles.git

cd dotfiles

./install.sh vim

His configuration is fairly up to date as of today.

ruby LoadError: cannot load such file

I just came across a similar problem. Try

require './st.rb'

This should do the trick.

Fatal error: Can't open and lock privilege tables: Table 'mysql.host' doesn't exist

Initialize MySQL before starting it on Windows:

mysqld --initialize

Predict() - Maybe I'm not understanding it

First, you want to use

model <- lm(Total ~ Coupon, data=df)

not model <-lm(df$Total ~ df$Coupon, data=df).

Second, by saying lm(Total ~ Coupon), you are fitting a model that uses Total as the response variable, with Coupon as the predictor. That is, your model is of the form Total = a + b*Coupon, with a and b the coefficients to be estimated. Note that the response goes on the left side of the ~, and the predictor(s) on the right.

Because of this, when you ask R to give you predicted values for the model, you have to provide a set of new predictor values, ie new values of Coupon, not Total.

Third, judging by your specification of newdata, it looks like you're actually after a model to fit Coupon as a function of Total, not the other way around. To do this:

model <- lm(Coupon ~ Total, data=df)
new.df <- data.frame(Total=c(79037022, 83100656, 104299800))
predict(model, new.df)

R: numeric 'envir' arg not of length one in predict()

There are several problems here:

  1. The newdata argument of predict() needs a predictor variable. You should thus pass it values for Coupon, instead of Total, which is the response variable in your model.

  2. The predictor variable needs to be passed in as a named column in a data frame, so that predict() knows what the numbers it's been handed represent. (The need for this becomes clear when you consider more complicated models, with more than one predictor variable.)

  3. For this to work, your original call should pass df in through the data argument, rather than using it directly in your formula. (This way, the name of the column in newdata will be able to match the name on the RHS of the formula).

With those changes incorporated, this will work:

model <- lm(Total ~ Coupon, data=df)
new <- data.frame(Coupon = df$Coupon)
predict(model, newdata = new, interval="confidence")

How to compare two colors for similarity/difference

Kotlin version where you specify what percentage match you want.

Method call with the optional percent argument

isMatchingColor(intColor1, intColor2, 95) // should match color if 95% similar

Method body

import android.graphics.Color
import kotlin.math.abs

private fun isMatchingColor(intColor1: Int, intColor2: Int, percent: Int = 90): Boolean {
    // Maximum allowed per-channel difference for the requested match percentage
    val threshold = 255 - (255 / 100f * percent)

    val diffAlpha = abs(Color.alpha(intColor1) - Color.alpha(intColor2))
    val diffRed = abs(Color.red(intColor1) - Color.red(intColor2))
    val diffGreen = abs(Color.green(intColor1) - Color.green(intColor2))
    val diffBlue = abs(Color.blue(intColor1) - Color.blue(intColor2))

    if (diffAlpha > threshold) {
        return false
    }

    if (diffRed > threshold) {
        return false
    }

    if (diffGreen > threshold) {
        return false
    }

    if (diffBlue > threshold) {
        return false
    }

    return true
}

Configuring Log4j Loggers Programmatically

In the case that you have defined an appender in log4j properties and would like to update it programmatically, set the name in the log4j properties and get it by name.

Here's an example log4j.properties entry:

log4j.appender.stdout.Name=console
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target=System.out
log4j.appender.stdout.Threshold=INFO

To update it, do the following:

((ConsoleAppender) Logger.getRootLogger().getAppender("console")).setThreshold(Level.DEBUG);

R legend placement in a plot

Building on @P-Lapointe's solution, but making it extremely easy: you can take the maximum values from your data with max() and then reuse those maxima to set the legend's x/y coordinates. To make sure you don't go beyond the borders, set ylim slightly above the maximum values.

a=c(rnorm(1000))
b=c(rnorm(1000))
par(mfrow=c(1,2))
plot(a,ylim=c(0,max(a)+1))
legend(x=max(a)+0.5,legend="a",pch=1)
plot(a,b,ylim=c(0,max(b)+1),pch=2)
legend(x=max(b)-1.5,y=max(b)+1,legend="b",pch=2)


log4net hierarchy and logging levels

Try it like this; it worked for me:

<root>
  <!--<level value="ALL" />-->
  <level value="ERROR" />
  <level value="INFO" />
  <level value="WARN" />     
</root>

This is intended to log three kinds of messages: error, info, and warning. (Note that when several <level> elements are listed, only the last one takes effect, so this configuration effectively sets the root level to WARN.)

SQL Server: Multiple table joins with a WHERE clause

You need to do a LEFT JOIN.

SELECT Computer.ComputerName, Application.Name, Software.Version
FROM Computer
JOIN dbo.Software_Computer
    ON Computer.ID = Software_Computer.ComputerID
LEFT JOIN dbo.Software
    ON Software_Computer.SoftwareID = Software.ID
RIGHT JOIN dbo.Application
    ON Application.ID = Software.ApplicationID
WHERE Computer.ID = 1 

Here is the explanation:

The result of a left outer join (or simply left join) for table A and B always contains all records of the "left" table (A), even if the join-condition does not find any matching record in the "right" table (B). This means that if the ON clause matches 0 (zero) records in B, the join will still return a row in the result—but with NULL in each column from B. This means that a left outer join returns all the values from the left table, plus matched values from the right table (or NULL in case of no matching join predicate). If the right table returns one row and the left table returns more than one matching row for it, the values in the right table will be repeated for each distinct row on the left table. From Oracle 9i onwards the LEFT OUTER JOIN statement can be used as well as (+).

Generic deep diff between two objects

I wrote a little class that does what you want; you can test it here.

The only thing that differs from your proposal is that I don't consider

[1,[{c: 1},2,3],{a:'hey'}]

and

[{a:'hey'},1,[3,{c: 1},2]]

to be the same, because I think that arrays are not equal if the order of their elements is not the same. Of course this can be changed if needed. Also, this code can be further enhanced to take a function as an argument that would be used to format the diff object in an arbitrary way based on the primitive values passed in (now this job is done by the compareValues method).

var deepDiffMapper = function () {
  return {
    VALUE_CREATED: 'created',
    VALUE_UPDATED: 'updated',
    VALUE_DELETED: 'deleted',
    VALUE_UNCHANGED: 'unchanged',
    map: function(obj1, obj2) {
      if (this.isFunction(obj1) || this.isFunction(obj2)) {
        throw 'Invalid argument. Function given, object expected.';
      }
      if (this.isValue(obj1) || this.isValue(obj2)) {
        return {
          type: this.compareValues(obj1, obj2),
          data: obj1 === undefined ? obj2 : obj1
        };
      }

      var diff = {};
      for (var key in obj1) {
        if (this.isFunction(obj1[key])) {
          continue;
        }

        var value2 = undefined;
        if (obj2[key] !== undefined) {
          value2 = obj2[key];
        }

        diff[key] = this.map(obj1[key], value2);
      }
      for (var key in obj2) {
        if (this.isFunction(obj2[key]) || diff[key] !== undefined) {
          continue;
        }

        diff[key] = this.map(undefined, obj2[key]);
      }

      return diff;

    },
    compareValues: function (value1, value2) {
      if (value1 === value2) {
        return this.VALUE_UNCHANGED;
      }
      if (this.isDate(value1) && this.isDate(value2) && value1.getTime() === value2.getTime()) {
        return this.VALUE_UNCHANGED;
      }
      if (value1 === undefined) {
        return this.VALUE_CREATED;
      }
      if (value2 === undefined) {
        return this.VALUE_DELETED;
      }
      return this.VALUE_UPDATED;
    },
    isFunction: function (x) {
      return Object.prototype.toString.call(x) === '[object Function]';
    },
    isArray: function (x) {
      return Object.prototype.toString.call(x) === '[object Array]';
    },
    isDate: function (x) {
      return Object.prototype.toString.call(x) === '[object Date]';
    },
    isObject: function (x) {
      return Object.prototype.toString.call(x) === '[object Object]';
    },
    isValue: function (x) {
      return !this.isObject(x) && !this.isArray(x);
    }
  }
}();


var result = deepDiffMapper.map({
  a: 'i am unchanged',
  b: 'i am deleted',
  e: {
    a: 1,
    b: false,
    c: null
  },
  f: [1, {
    a: 'same',
    b: [{
      a: 'same'
    }, {
      d: 'delete'
    }]
  }],
  g: new Date('2017.11.25')
}, {
  a: 'i am unchanged',
  c: 'i am created',
  e: {
    a: '1',
    b: '',
    d: 'created'
  },
  f: [{
    a: 'same',
    b: [{
      a: 'same'
    }, {
      c: 'create'
    }]
  }, 1],
  g: new Date('2017.11.25')
});
console.log(result);

How to copy in bash all directory and files recursive?

Code for a simple copy:

cp -r ./SourceFolder ./DestFolder

Code for a verbose copy (prints each file as it is copied):

cp -rv ./SourceFolder ./DestFolder

Code for a forced copy (if an existing destination file cannot be opened, it is removed and the copy is retried):

cp -rf ./SourceFolder ./DestFolder

For detailed help:

cp --help

Logging levels - Logback - rule-of-thumb to assign log levels

This may also help, tangentially: understanding whether a logging request (from the code) at a certain level will actually be logged, given the effective logging level that a deployment is configured with. Decide what effective level you want to configure your deployment with from the other answers here, and then refer to this to see whether a particular logging request from your code will actually be logged.

For example:

  • "Will a logging code line that logs at WARN actually get logged on my deployment configured with ERROR?" The table says NO.
  • "Will a logging code line that logs at WARN actually get logged on my deployment configured with DEBUG?" The table says YES.

From the logback documentation:

In a more graphic way, here is how the selection rule works. In the following table, the vertical header shows the level of the logging request, designated by p, while the horizontal header shows the effective level of the logger, designated by q. The intersection of the rows (level request) and columns (effective level) is the boolean resulting from the basic selection rule. (The table itself is in the logback documentation.)

So a code line that requests logging will only actually get logged if the effective logging level of its deployment is less than or equal to that code line's requested level of severity.
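
Not part of the original answer, but as a concrete illustration of the rule, here is a minimal SLF4J/logback sketch (it assumes logback and SLF4J are on the classpath and that the root logger's effective level is INFO; the class name is invented):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SelectionRuleDemo {
    private static final Logger log = LoggerFactory.getLogger(SelectionRuleDemo.class);

    public static void main(String[] args) {
        // With an effective level of INFO, a request is logged only if its level >= INFO.
        log.debug("not logged (DEBUG < INFO)");
        log.info("logged (INFO >= INFO)");
        log.warn("logged (WARN >= INFO)");
    }
}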

unique() for more than one variable

There are a few ways to get all unique combinations of a set of factors.

with(df, interaction(yad, per, drop=TRUE))   # gives labels
with(df, yad:per)                            # ditto

aggregate(numeric(nrow(df)), df[c("yad", "per")], length)    # gives a data frame

Converting a factor to numeric without losing information R (as.numeric() doesn't seem to work)

First, a factor consists of indices and levels. This fact is very, very important when you are struggling with factors.

For example,

> z <- factor(letters[c(3, 2, 3, 4)])

# human-friendly display, but internal structure is invisible
> z
[1] c b c d
Levels: b c d

# internal structure of factor
> unclass(z)
[1] 2 1 2 3
attr(,"levels")
[1] "b" "c" "d"

Here, z has 4 elements.
The indices are 2, 1, 2, 3 in that order.
Each index is associated with a level: 1 -> b, 2 -> c, 3 -> d.

Then, as.numeric simply converts the index part of the factor into a numeric vector.
as.character handles both the indices and the levels, and generates a character vector expressed by the levels.

?as.numeric says that Factors are handled by the default method.

Rails: Missing host to link to! Please provide :host parameter or set default_url_options[:host]

Just in case someone finds this while searching for errors concerning ActiveStorage:

If you have a controller action where you want to generate upload URLs etc. with the local disk service (most likely in the test environment), you need to include ActiveStorage::SetCurrent in the controller in order to allow blob.service_url_for_direct_upload to work correctly.

fs: how do I locate a parent folder?

I'm running an Electron app and I can get the parent folder with path.resolve():

Parent, 1 level up: path.resolve(__dirname, '..') + '/'

Parent, 2 levels up: path.resolve(__dirname, '..', '..') + '/'

How to assign colors to categorical variables in ggplot2 that have stable mapping?

The easiest solution is to convert your categorical variable to a factor prior to subsetting. The bottom line is that you need a factor variable with exactly the same levels in all your subsets.

library(ggplot2)
dataset <- data.frame(category = rep(LETTERS[1:5], 100), 
    x = rnorm(500, mean = rep(1:5, 100)), y = rnorm(500, mean = rep(1:5, 100)))
dataset$fCategory <- factor(dataset$category)
subdata <- subset(dataset, category %in% c("A", "D", "E"))

With a character variable

ggplot(dataset, aes(x = x, y = y, colour = category)) + geom_point()
ggplot(subdata, aes(x = x, y = y, colour = category)) + geom_point()

With a factor variable

ggplot(dataset, aes(x = x, y = y, colour = fCategory)) + geom_point()
ggplot(subdata, aes(x = x, y = y, colour = fCategory)) + geom_point()

How to duplicate a git repository? (without forking)

See https://help.github.com/articles/duplicating-a-repository

Short version:

In order to make an exact duplicate, you need to perform both a bare-clone and a mirror-push:

mkdir foo; cd foo 
# move to a scratch dir

git clone --bare https://github.com/exampleuser/old-repository.git
# Make a bare clone of the repository

cd old-repository.git
git push --mirror https://github.com/exampleuser/new-repository.git
# Mirror-push to the new repository

cd ..
rm -rf old-repository.git  
# Remove our temporary local repository

NOTE: the above will work fine with any remote git repo; the instructions are not specific to GitHub.

The above creates a new remote copy of the repo. Then clone it down to your working machine.

Why are the Level.FINE logging messages not showing?

I found my actual problem and it was not mentioned in any answer: some of my unit-tests were causing logging initialization code to be run multiple times within the same test suite, messing up the logging on the later tests.
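
Not from the original answer, but a hedged sketch of one way to guard against repeated initialization with java.util.logging (the class name, the AtomicBoolean guard, and the /logging.properties classpath resource are all assumptions made for this example):

import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.logging.LogManager;

public final class LoggingSetup {

    private static final AtomicBoolean CONFIGURED = new AtomicBoolean(false);

    private LoggingSetup() {
    }

    // Safe to call from every test class: only the first call reads the configuration.
    public static void configure() {
        if (CONFIGURED.compareAndSet(false, true)) {
            try {
                // Assumes a logging.properties file on the classpath.
                LogManager.getLogManager().readConfiguration(
                        LoggingSetup.class.getResourceAsStream("/logging.properties"));
            } catch (IOException e) {
                throw new IllegalStateException("Could not load logging.properties", e);
            }
        }
    }
}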

Is it possible to force Excel recognize UTF-8 CSV files automatically?

Yes, it is possible. When writing the stream that creates the CSV, the first thing to do is to write the UTF-8 BOM (byte order mark):

myStream.Write(Encoding.UTF8.GetPreamble(), 0, Encoding.UTF8.GetPreamble().Length)
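
The snippet above is .NET. Purely to illustrate the same idea in another language, here is a rough Java sketch (the file name and the sample row are made up; the point is simply that the three BOM bytes are written before the UTF-8 encoded data):

import java.io.FileOutputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class Utf8CsvWithBom {
    public static void main(String[] args) throws Exception {
        try (OutputStream out = new FileOutputStream("report.csv")) {
            out.write(new byte[] { (byte) 0xEF, (byte) 0xBB, (byte) 0xBF }); // UTF-8 BOM
            out.write("name;city\nJosé;Zürich\n".getBytes(StandardCharsets.UTF_8));
        }
    }
}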

How to use log levels in java

The different log levels are helpful for tools that analyse your log files. Normally a log file contains lots of information. To avoid information overload (or, here, a stack overflow ^^), you can use the log levels to group that information.
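
For instance, with java.util.logging the standard levels look like this (a minimal sketch; the class name is invented and nothing here is specific to any particular project):

import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelsDemo {
    private static final Logger LOG = Logger.getLogger(LogLevelsDemo.class.getName());

    public static void main(String[] args) {
        LOG.severe("something failed and needs attention");  // highest standard level
        LOG.warning("something looks suspicious");
        LOG.info("normal, coarse-grained progress messages");
        LOG.fine("detailed tracing, usually disabled in production");

        // Only messages at or above the logger's level are published
        // (the console handler applies its own level on top of that).
        LOG.setLevel(Level.WARNING);
        LOG.info("this one is now filtered out");
    }
}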

How can I configure Logback to log different levels for a logger to different destinations?

I use logback.groovy to configure my logback but you can do it with xml config as well:

import static ch.qos.logback.classic.Level.*
import static ch.qos.logback.core.spi.FilterReply.DENY
import static ch.qos.logback.core.spi.FilterReply.NEUTRAL
import ch.qos.logback.classic.boolex.GEventEvaluator
import ch.qos.logback.classic.encoder.PatternLayoutEncoder
import ch.qos.logback.core.ConsoleAppender
import ch.qos.logback.core.filter.EvaluatorFilter

def patternExpression = "%date{ISO8601} [%5level] %msg%n"

appender("STDERR", ConsoleAppender) {
    filter(EvaluatorFilter) {
      evaluator(GEventEvaluator) {
        expression = 'e.level.toInt() >= WARN.toInt()'
      }
      onMatch = NEUTRAL
      onMismatch = DENY
    }
    encoder(PatternLayoutEncoder) {
      pattern = patternExpression
    }
    target = "System.err"
  }

appender("STDOUT", ConsoleAppender) {
    filter(EvaluatorFilter) {
      evaluator(GEventEvaluator) {
        expression = 'e.level.toInt() < WARN.toInt()'
      }
      onMismatch = DENY
      onMatch = NEUTRAL
    }
    encoder(PatternLayoutEncoder) {
      pattern = patternExpression
    }
    target = "System.out"
}

logger("org.hibernate.type", WARN)
logger("org.hibernate", WARN)
logger("org.springframework", WARN)

root(INFO,["STDERR","STDOUT"])

I think using GEventEvaluator is simpler because there is no need to create filter classes.
I apologize for my English!

How to recursively find and list the latest modified files in a directory with subdirectories and times

Quick Bash function:

# findLatestModifiedFiles(directory, [max=10, [format="%Td %Tb %TY, %TT"]])
function findLatestModifiedFiles() {
    local d="${1:-.}"
    local m="${2:-10}"
    local f="${3:-%Td %Tb %TY, %TT}"

    find "$d" -type f -printf "%T@ :$f %p\n" | sort -nr | cut -d: -f2- | head -n"$m"
}

Find the latest modified file in a directory:

findLatestModifiedFiles "/home/jason/" 1

You can also specify your own date/time format as the third argument.

incompatible character encodings: ASCII-8BIT and UTF-8

I encountered the error while migrating an app from Ruby 1.8.7 to 1.9.3, and it only occurred in production. It turned out that I had some leftovers in my Memcache store. The now encoding-sensitive Ruby 1.9.3 version of my app tried to mix old ASCII-8BIT values with new UTF-8 ones.

It was as simple as flushing the cache to fix it for me.

Unresolved Import Issues with PyDev and Eclipse

I just upgraded a wxWindows project to Python 2.7 and had no end of trouble getting PyDev to recognize the new interpreter. I did the same thing as above to configure the interpreter and made a fresh install of Eclipse and PyDev. I thought some part of Python must have been corrupt, so I reinstalled everything again. Arghh! I closed and reopened the project, and restarted Eclipse, between all of these changes.

I FINALLY noticed you can 'remove the PyDev project config' by right-clicking on the project. Then it can be made into a PyDev project again, and now it is as good as gold!

Random / noise functions for GLSL

A straight, jagged version of 1d Perlin, essentially a random lfo zigzag.

half  rn(float xx){         
    half x0=floor(xx);
    half x1=x0+1;
    half v0 = frac(sin (x0*.014686)*31718.927+x0);
    half v1 = frac(sin (x1*.014686)*31718.927+x1);          

    return (v0*(1-frac(xx))+v1*(frac(xx)))*2-1*sin(xx);
}

I have also found 1/2/3/4D Perlin noise on Shadertoy owner Inigo Quilez's Perlin tutorial website, as well as Voronoi and so forth; he has full, fast implementations and code for them.

getaddrinfo: nodename nor servname provided, or not known

I was seeing this error unrelated to rails. It turned out my test was trying to use a port that was too high (greater than 65535).

This code will produce the error in question

require 'socket'
Socket.getaddrinfo("127.0.0.1", "65536")

What are the options for storing hierarchical data in a relational database?

This design was not mentioned yet:

Multiple lineage columns

Though it has limitations, if you can bear them, it's very simple and very efficient. Features:

  • Columns: one for each lineage level; they refer to all the parents up to the root. Levels below the current item's level are set to 0 (or NULL).
  • There is a fixed limit to how deep the hierarchy can be
  • Cheap ancestors, descendants, level
  • Cheap insert, delete, move of the leaves
  • Expensive insert, delete, move of the internal nodes

Here follows an example - taxonomic tree of birds so the hierarchy is Class/Order/Family/Genus/Species - species is the lowest level, 1 row = 1 taxon (which corresponds to species in the case of the leaf nodes):

CREATE TABLE `taxons` (
  `TaxonId` smallint(6) NOT NULL default '0',
  `ClassId` smallint(6) default NULL,
  `OrderId` smallint(6) default NULL,
  `FamilyId` smallint(6) default NULL,
  `GenusId` smallint(6) default NULL,
  `Name` varchar(150) NOT NULL default ''
);

and the example of the data:

+---------+---------+---------+----------+---------+-------------------------------+
| TaxonId | ClassId | OrderId | FamilyId | GenusId | Name                          |
+---------+---------+---------+----------+---------+-------------------------------+
|     254 |       0 |       0 |        0 |       0 | Aves                          |
|     255 |     254 |       0 |        0 |       0 | Gaviiformes                   |
|     256 |     254 |     255 |        0 |       0 | Gaviidae                      |
|     257 |     254 |     255 |      256 |       0 | Gavia                         |
|     258 |     254 |     255 |      256 |     257 | Gavia stellata                |
|     259 |     254 |     255 |      256 |     257 | Gavia arctica                 |
|     260 |     254 |     255 |      256 |     257 | Gavia immer                   |
|     261 |     254 |     255 |      256 |     257 | Gavia adamsii                 |
|     262 |     254 |       0 |        0 |       0 | Podicipediformes              |
|     263 |     254 |     262 |        0 |       0 | Podicipedidae                 |
|     264 |     254 |     262 |      263 |       0 | Tachybaptus                   |

This is great because this way you accomplish all the needed operations in a very easy way, as long as the internal categories don't change their level in the tree.

Difference between "read commited" and "repeatable read"

Old question which has an accepted answer already, but I like to think of these two isolation levels in terms of how they change the locking behavior in SQL Server. This might be helpful for those who are debugging deadlocks like I was.

READ COMMITTED (default)

Shared locks are taken in the SELECT and then released when the SELECT statement completes. This is how the system can guarantee that there are no dirty reads of uncommitted data. Other transactions can still change the underlying rows after your SELECT completes and before your transaction completes.

REPEATABLE READ

Shared locks are taken in the SELECT and then released only after the transaction completes. This is how the system can guarantee that the values you read will not change during the transaction (because they remain locked until the transaction finishes).
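
Not part of the original answer, but for reference, this is how you would request these isolation levels from Java through JDBC (a minimal sketch; the connection URL, credentials and query are placeholders, and a SQL Server JDBC driver is assumed to be on the classpath):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:sqlserver://localhost;databaseName=test", "user", "password")) {

            // Default: shared locks are released as soon as each SELECT completes.
            con.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);

            // Shared locks are held until the transaction completes,
            // so re-reading the same rows returns the same values.
            con.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);

            con.setAutoCommit(false);
            try (Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT 1")) {
                while (rs.next()) { /* read inside the transaction */ }
            }
            con.commit();
        }
    }
}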

generating variable names on fly in python

I got your problem, and here is my answer:

prices = [5, 12, 45]
suffixes = ['1', '2', '3']

for i in range(3):
    vars()["prices" + suffixes[i]] = prices[i]
    print("prices" + suffixes[i] + " = " + str(prices[i]))

so it prints:

prices1 = 5
prices2 = 12
prices3 = 45

Setting a spinner onClickListener() in Android

I suggest that all events for a Spinner be divided into two types:

  1. User events (what you refer to as "click" events).

  2. Program events.

I also suggest that when you want to catch user events, you just want to get rid of the "program events". So it's pretty simple:

private void setSelectionWithoutDispatch(Spinner spinner, int position) {
    AdapterView.OnItemSelectedListener onItemSelectedListener = spinner.getOnItemSelectedListener();
    spinner.setOnItemSelectedListener(null);
    spinner.setSelection(position, false);
    spinner.setOnItemSelectedListener(onItemSelectedListener);
}

There's a key point: you need setSelection(position, false). Passing false for the animate parameter fires the event immediately; the default behaviour is to push the event onto the event queue.

How to view hierarchical package structure in Eclipse package explorer

To switch the Package Explorer to a hierarchical package presentation, open its view menu (the small triangle in the Package Explorer toolbar) and select Package Presentation > Hierarchical.

What is "runtime"?

Runtime is somewhat the opposite of design-time and compile-time/link-time. Historically, the term comes from slow mainframe environments where machine time was expensive.

Fast and Lean PDF Viewer for iPhone / iPad / iOS - tips and hints?

For a simple and effective PDF viewer, when you require only limited functionality, you can now (iOS 4.0+) use the QuickLook framework:

First, you need to link against QuickLook.framework and #import <QuickLook/QuickLook.h>;

Afterwards, in either viewDidLoad or any of the lazy initialization methods:

QLPreviewController *previewController = [[QLPreviewController alloc] init];
previewController.dataSource = self;
previewController.delegate = self;
previewController.currentPreviewItemIndex = indexPath.row;
[self presentModalViewController:previewController animated:YES];
[previewController release];

Google Maps v3 - limit viewable area and zoom level

This can be used to re-center the map to a specific location, which is what I needed.

    var MapBounds = new google.maps.LatLngBounds(
    new google.maps.LatLng(35.676263, 13.949096),
    new google.maps.LatLng(36.204391, 14.89038));

    google.maps.event.addListener(GoogleMap, 'dragend', function ()
    {
        if (MapBounds.contains(GoogleMap.getCenter()))
        {
            return;
        }
        else
        {
            GoogleMap.setCenter(new google.maps.LatLng(35.920242, 14.428825));
        }
    });

Show percent % instead of counts in charts of categorical variables

This modified code should work:

p = ggplot(mydataf, aes(x = foo)) + 
    geom_bar(aes(y = (..count..)/sum(..count..))) + 
    scale_y_continuous(formatter = 'percent')

If your data has NAs and you don't want them to be included in the plot, pass na.omit(mydataf) as the argument to ggplot.

Hope this helps.

How to map a composite key with JPA and Hibernate?

Assuming you have the following database tables:

Database tables

First, you need to create the @Embeddable holding the composite identifier:

@Embeddable
public class EmployeeId implements Serializable {
 
    @Column(name = "company_id")
    private Long companyId;
 
    @Column(name = "employee_number")
    private Long employeeNumber;
 
    public EmployeeId() {
    }
 
    public EmployeeId(Long companyId, Long employeeId) {
        this.companyId = companyId;
        this.employeeNumber = employeeId;
    }
 
    public Long getCompanyId() {
        return companyId;
    }
 
    public Long getEmployeeNumber() {
        return employeeNumber;
    }
 
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof EmployeeId)) return false;
        EmployeeId that = (EmployeeId) o;
        return Objects.equals(getCompanyId(), that.getCompanyId()) &&
                Objects.equals(getEmployeeNumber(), that.getEmployeeNumber());
    }
 
    @Override
    public int hashCode() {
        return Objects.hash(getCompanyId(), getEmployeeNumber());
    }
}

With this in place, we can map the Employee entity which uses the composite identifier by annotating it with @EmbeddedId:

@Entity(name = "Employee")
@Table(name = "employee")
public class Employee {
 
    @EmbeddedId
    private EmployeeId id;
 
    private String name;
 
    public EmployeeId getId() {
        return id;
    }
 
    public void setId(EmployeeId id) {
        this.id = id;
    }
 
    public String getName() {
        return name;
    }
 
    public void setName(String name) {
        this.name = name;
    }
}

The Phone entity, which has a @ManyToOne association to Employee, needs to reference the composite identifier from the parent class via two @JoinColumn mappings:

@Entity(name = "Phone")
@Table(name = "phone")
public class Phone {
 
    @Id
    @Column(name = "`number`")
    private String number;
 
    @ManyToOne
    @JoinColumns({
        @JoinColumn(
            name = "company_id",
            referencedColumnName = "company_id"),
        @JoinColumn(
            name = "employee_number",
            referencedColumnName = "employee_number")
    })
    private Employee employee;
 
    public Employee getEmployee() {
        return employee;
    }
 
    public void setEmployee(Employee employee) {
        this.employee = employee;
    }
 
    public String getNumber() {
        return number;
    }
 
    public void setNumber(String number) {
        this.number = number;
    }
}

How to convert a factor to integer\numeric without loss of information?

From the many answers I could read, the only way given was to expand the number of variables according to the number of factor levels. If you have a variable "pet" with levels "dog" and "cat", you would end up with pet_dog and pet_cat.

In my case I wanted to keep the same number of variables, by just translating the factor variable to a numeric one, in a way that can be applied to many variables with many levels, so that, for instance, cat=1 and dog=0.

Please find the corresponding solution below:

crime <- data.frame(city = c("SF", "SF", "NYC"),
                    year = c(1990, 2000, 1990),
                    crime = 1:3)

indx <- sapply(crime, is.factor)

crime[indx] <- lapply(crime[indx], function(x){ 
  listOri <- unique(x)
  listMod <- seq_along(listOri)
  res <- factor(x, levels=listOri)
  res <- as.numeric(res)
  return(res)
}
)

Entity framework linq query Include() multiple children entities

You might find this article of interest which is available at codeplex.com.

The article presents a new way of expressing queries that span multiple tables in the form of declarative graph shapes.

Moreover, the article contains a thorough performance comparison of this new approach with EF queries. This analysis shows that GBQ quickly outperforms EF queries.

Convert data.frame columns from factors to characters

When you create your data frame, include stringsAsFactors = FALSE to avoid these misunderstandings entirely.

Login failed for user 'DOMAIN\MACHINENAME$'

My issue turned out to be in the Publish settings in Visual Studio. My appsettings.json connection string was correct, but the database connection strings in the publish settings had Integrated Security = true.

How to style an asp.net menu with CSS

I remember the StaticSelectedStyle-CssClass attribute used to work in ASP.NET 2.0. And in .NET 4.0, if you change the Menu control's RenderingMode attribute to "Table" (thus making it render the menu as tables and sub-tables like it did back in '05), it will at least write your specified StaticSelectedStyle-CssClass into the proper HTML element.

That may be enough for your page to work like you want. However, my work-around for the selected menu item in ASP.NET 4.0 (when leaving RenderingMode at its default) is to mimic the control's generated "selected" CSS class but give mine the "!important" CSS declaration, so my styles take precedence where needed.

For instance, by default the Menu control renders an "li" element and a child "a" for each menu item, and the selected menu item's "a" element will contain class="selected" (among other control-generated CSS class names, including "static" if it's a static menu item). Therefore I add my own selector to the page (or in a separate stylesheet file) for "static" and "selected" "a" tags like so:

a.selected.static
{
background-color: #f5f5f5 !important;
border-top: Red 1px solid !important;
border-left: Red 1px solid !important;
border-right: Red 1px solid !important;
} 

How do I write outputs to the Log in Android?

You can use my library called RDALogger. Here is the GitHub link.

With this library, you can log your message with the method name, class name, line number and an anchor link. When you click the log line, the screen jumps to that line of code.

To use the library, add the dependencies below.

In the root-level build.gradle:

allprojects {
        repositories {
            ...
            maven { url 'https://jitpack.io' }
        }
    }

In the app-level build.gradle:

dependencies {
            implementation 'com.github.ardakaplan:RDALogger:1.0.0'
    }

To initialize the library, start it like this (in your Application class, or before first use):

RDALogger.start("TAG NAME").enableLogging(true);

And then you can log whatever you want:

    RDALogger.info("info");
    RDALogger.debug("debug");
    RDALogger.verbose("verbose");
    RDALogger.warn("warn");
    RDALogger.error("error");
    RDALogger.error(new Throwable());
    RDALogger.error("error", new Throwable());

And finally the output shows you everything you want (class name, method name, anchor link, message):

08-09 11:13:06.023 20025-20025/com.ardakaplan.application I/Application: IN CLASS : (ENApplication.java:29)   ///   IN METHOD : onCreate
    info

CSS: How to have position:absolute div inside a position:relative div not be cropped by an overflow:hidden on a container

If there is other content not being shown inside the outer-div (the green box), why not have that content wrapped inside another div, let's call it "content". Have overflow hidden on this new inner-div, but keep overflow visible on the green box.

The only catch is that you will then have to mess around to make sure that the content div doesn't interfere with the positioning of the red box, but it sounds like you should be able to fix that with little headache.

<div id="1" background: #efe; padding: 5px; width: 125px">
    <div id="content" style="overflow: hidden;">
    </div>
    <div id="2" style="position: relative; background: #fee; padding: 2px; width: 100px; height: 100px">
        <div id="3" style="position: absolute; top: 10px; background: #eef; padding: 2px; width: 75px; height: 150px"/>
    </div>
</div>

How to check in Javascript if one element is contained within another

Update: There's now a native way to achieve this: Node.contains(), e.g. element1.contains(element2). It is mentioned in the comments and in the other answers as well.

Old answer:

Using the parentNode property should work. It's also pretty safe from a cross-browser standpoint. If the relationship is known to be one level deep, you could check it simply:

if (element2.parentNode == element1) { ... }

If the child can be nested arbitrarily deep inside the parent, you could use a function similar to the following to test for the relationship:

function isDescendant(parent, child) {
     var node = child.parentNode;
     while (node != null) {
         if (node == parent) {
             return true;
         }
         node = node.parentNode;
     }
     return false;
}

jQuery Combobox/select autocomplete?

[edit] The lovely Chosen jQuery plugin has been brought to my attention; it looks like a great alternative to me.

Or if you just want to use jQuery autocomplete, I've extended the combobox example to support defaults and remove the tooltips to give what I think is more expected behaviour. Try it out.

(function ($) {
    $.widget("ui.combobox", {
        _create: function () {
            var input,
              that = this,
              wasOpen = false,
              select = this.element.hide(),
              selected = select.children(":selected"),
              defaultValue = selected.text() || "",
              wrapper = this.wrapper = $("<span>")
                .addClass("ui-combobox")
                .insertAfter(select);

            function removeIfInvalid(element) {
                var value = $(element).val(),
                  matcher = new RegExp("^" + $.ui.autocomplete.escapeRegex(value) + "$", "i"),
                  valid = false;
                select.children("option").each(function () {
                    if ($(this).text().match(matcher)) {
                        this.selected = valid = true;
                        return false;
                    }
                });

                if (!valid) {
                    // remove invalid value, as it didn't match anything
                    $(element).val(defaultValue);
                    select.val(defaultValue);
                    input.data("ui-autocomplete").term = "";
                }
            }

            input = $("<input>")
              .appendTo(wrapper)
              .val(defaultValue)
              .attr("title", "")
              .addClass("ui-state-default ui-combobox-input")
              .width(select.width())
              .autocomplete({
                  delay: 0,
                  minLength: 0,
                  autoFocus: true,
                  source: function (request, response) {
                      var matcher = new RegExp($.ui.autocomplete.escapeRegex(request.term), "i");
                      response(select.children("option").map(function () {
                          var text = $(this).text();
                          if (this.value && (!request.term || matcher.test(text)))
                              return {
                                  label: text.replace(
                                    new RegExp(
                                      "(?![^&;]+;)(?!<[^<>]*)(" +
                                      $.ui.autocomplete.escapeRegex(request.term) +
                                      ")(?![^<>]*>)(?![^&;]+;)", "gi"
                                    ), "<strong>$1</strong>"),
                                  value: text,
                                  option: this
                              };
                      }));
                  },
                  select: function (event, ui) {
                      ui.item.option.selected = true;
                      that._trigger("selected", event, {
                          item: ui.item.option
                      });
                  },
                  change: function (event, ui) {
                      if (!ui.item) {
                          removeIfInvalid(this);
                      }
                  }
              })
              .addClass("ui-widget ui-widget-content ui-corner-left");

            input.data("ui-autocomplete")._renderItem = function (ul, item) {
                return $("<li>")
                  .append("<a>" + item.label + "</a>")
                  .appendTo(ul);
            };

            $("<a>")
              .attr("tabIndex", -1)
              .appendTo(wrapper)
              .button({
                  icons: {
                      primary: "ui-icon-triangle-1-s"
                  },
                  text: false
              })
              .removeClass("ui-corner-all")
              .addClass("ui-corner-right ui-combobox-toggle")
              .mousedown(function () {
                  wasOpen = input.autocomplete("widget").is(":visible");
              })
              .click(function () {
                  input.focus();

                  // close if already visible
                  if (wasOpen) {
                      return;
                  }

                  // pass empty string as value to search for, displaying all results
                  input.autocomplete("search", "");
              });
        },

        _destroy: function () {
            this.wrapper.remove();
            this.element.show();
        }
    });
})(jQuery);

When to use the different log levels

An error is something that is wrong, plain wrong, no way around it, it needs to be fixed.

A warning is a sign of a pattern that might be wrong, but then also might not be.

Having said that, I cannot come up with a good example of a warning that isn't also an error. What I mean by that is that if you go to the trouble of logging a warning, you might as well fix the underlying issue.

However, things like "sql execution takes too long" might be a warning, while "sql execution deadlocks" is an error, so perhaps there are some cases after all.
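
To illustrate that last distinction with a hedged sketch (the 500 ms threshold, the class and method names, and the DeadlockException type are all invented for the example; SLF4J is assumed to be on the classpath):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class QueryRunner {
    private static final Logger log = LoggerFactory.getLogger(QueryRunner.class);

    void runQuery(String sql) {
        long start = System.currentTimeMillis();
        try {
            execute(sql); // hypothetical helper
            long elapsed = System.currentTimeMillis() - start;
            if (elapsed > 500) {
                // Might be a problem, might not: a warning.
                log.warn("sql execution took {} ms: {}", elapsed, sql);
            }
        } catch (DeadlockException e) {
            // Plain wrong, needs fixing: an error.
            log.error("sql execution deadlocked: {}", sql, e);
        }
    }

    private void execute(String sql) throws DeadlockException {
        // placeholder for the real database call
    }

    static class DeadlockException extends Exception {
    }
}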

How do I enable/disable log levels in Android?

You should use

    if (Log.isLoggable(TAG, Log.VERBOSE)) {
        Log.v(TAG, "my log message");
    }

Simplest way to do a recursive self-join?

The Quassnoi query with a change for large tables (parents with more than 10 children): format the row_number() as str(5).

WITH    q AS 
        (
        SELECT  m.*, CAST(str(ROW_NUMBER() OVER (ORDER BY m.ordernum),5) AS VARCHAR(MAX)) COLLATE Latin1_General_BIN AS bc
        FROM    #t m
        WHERE   ParentID =0
        UNION ALL
        SELECT  m.*,  q.bc + '.' + str(ROW_NUMBER()  OVER (PARTITION BY m.ParentID ORDER BY m.ordernum),5) COLLATE Latin1_General_BIN
        FROM    #t m
        JOIN    q
        ON      m.parentID = q.DBID
        )
SELECT  *
FROM    q
ORDER BY
        bc

Setting up an MS-Access DB for multi-user access

The correct way of building client/server Microsoft Access applications where the data is stored in an RDBMS is to use the Linked Table method. This ensures data isolation and concurrency are maintained between the Microsoft Access client application and the RDBMS data, with no additional and unnecessary programming logic and code, which would make maintenance more difficult and add to development time.

see: http://claysql.blogspot.com/2014/08/normal-0-false-false-false-en-us-x-none.html

Random color generator

Random color generation with brightness control:

function getRandColor(brightness){

    // Six levels of brightness from 0 to 5, 0 being the darkest
    var rgb = [Math.random() * 256, Math.random() * 256, Math.random() * 256];
    var mix = [brightness*51, brightness*51, brightness*51]; //51 => 255/5
    var mixedrgb = [rgb[0] + mix[0], rgb[1] + mix[1], rgb[2] + mix[2]].map(function(x){ return Math.round(x/2.0)})
    return "rgb(" + mixedrgb.join(",") + ")";
}

How to sort a dataframe by multiple column(s)

Another alternative, using the rgr package:

> library(rgr)
> gx.sort.df(dd, ~ -z+b)
    b x y z
4 Low C 9 2
2 Med D 3 1
1  Hi A 8 1
3  Hi A 9 1

Drop unused factor levels in a subsetted data frame

It is a known issue, and one possible remedy is provided by drop.levels() in the gdata package where your example becomes

> drop.levels(subdf)
  letters numbers
1       a       1
2       b       2
3       c       3
> levels(drop.levels(subdf)$letters)
[1] "a" "b" "c"

There is also the dropUnusedLevels function in the Hmisc package. However, it only works by altering the subset operator [ and is not applicable here.

As a corollary, a direct approach on a per-column basis is simply as.factor(as.character(data)):

> levels(subdf$letters)
[1] "a" "b" "c" "d" "e"
> subdf$letters <- as.factor(as.character(subdf$letters))
> levels(subdf$letters)
[1] "a" "b" "c"

What is DOM element?

DOM stands for Document Object Model. It is a W3C (World Wide Web Consortium) standard. It defines a standard for accessing and manipulating HTML and XML documents, and the elements of the DOM are the head, title, body tags, etc. So the answer to your first statement is:

Statement #1: You can add multiple classes to a single DOM element.

Explanation: <div class="cssclass1 cssclass2 cssclass3">

Here the div tag is a DOM element, and I have applied multiple classes to that DOM element.

What does Ruby have that Python doesn't, and vice versa?

From Ruby's website:

Similarities As with Python, in Ruby,...

  • There’s an interactive prompt (called irb).
  • You can read docs on the command line (with the ri command instead of pydoc).
  • There are no special line terminators (except the usual newline).
  • String literals can span multiple lines like Python’s triple-quoted strings.
  • Brackets are for lists, and braces are for dicts (which, in Ruby, are called “hashes”).
  • Arrays work the same (adding them makes one long array, but composing them like this a3 = [ a1, a2 ] gives you an array of arrays).
  • Objects are strongly and dynamically typed.
  • Everything is an object, and variables are just references to objects.
  • Although the keywords are a bit different, exceptions work about the same.
  • You’ve got embedded doc tools (Ruby’s is called rdoc).

Differences Unlike Python, in Ruby,...

  • Strings are mutable.
  • You can make constants (variables whose value you don’t intend to change).
  • There are some enforced case-conventions (ex. class names start with a capital letter, variables start with a lowercase letter).
  • There’s only one kind of list container (an Array), and it’s mutable.
  • Double-quoted strings allow escape sequences (like \t) and a special “expression substitution” syntax (which allows you to insert the results of Ruby expressions directly into other strings without having to "add " + "strings " + "together"). Single-quoted strings are like Python’s r"raw strings".
  • There are no “new style” and “old style” classes. Just one kind.
  • You never directly access attributes. With Ruby, it’s all method calls.
  • Parentheses for method calls are usually optional.
  • There’s public, private, and protected to enforce access, instead of Python’s _voluntary_ underscore __convention__.
  • “mixin’s” are used instead of multiple inheritance.
  • You can add or modify the methods of built-in classes. Both languages let you open up and modify classes at any point, but Python prevents modification of built-ins — Ruby does not.
  • You’ve got true and false instead of True and False (and nil instead of None).
  • When tested for truth, only false and nil evaluate to a false value. Everything else is true (including 0, 0.0, "", and []).
  • It’s elsif instead of elif.
  • It’s require instead of import. Otherwise though, usage is the same.
  • The usual-style comments on the line(s) above things (instead of docstrings below them) are used for generating docs.
  • There are a number of shortcuts that, although give you more to remember, you quickly learn. They tend to make Ruby fun and very productive.

Nesting optgroups in a dropdownlist/select

I really like Broken Arrow's solution above in this post. I have just improved/changed it a bit so that what were called labels can be toggled and are not considered options. I have used a small piece of jQuery, but this could be done without jQuery.

I have replaced intermediate labels (non-leaf labels) with links, which call a function on click. This function is in charge of toggling the div that follows the clicked link, so that it expands/collapses the options. This avoids the possibility of selecting an intermediate element in the hierarchy, which is usually what you want. Making a variant that allows selecting intermediate elements should be easy.

This is the modified html:

<div class="NestedSelect">
    <a onclick="toggleDiv(this)">Fruit</a>
    <div>
        <label>
            <input type="radio" name="MySelectInputName"><span>Apple</span></label>
        <label>
            <input type="radio" name="MySelectInputName"><span>Banana</span></label>
        <label>
            <input type="radio" name="MySelectInputName"><span>Orange</span></label>
    </div>

    <a onclick="toggleDiv(this)">Drink</a>
    <div>
        <label>
            <input type="radio" name="MySelectInputName"><span>Water</span></label>

        <a onclick="toggleDiv(this)">Soft</a>
        <div>
            <label>
                <input type="radio" name="MySelectInputName"><span>Cola</span></label>
            <label>
                <input type="radio" name="MySelectInputName"><span>Soda</span></label>
            <label>
                <input type="radio" name="MySelectInputName"><span>Lemonade</span></label>
        </div>

        <a onclick="toggleDiv(this)">Hard</a>
        <div>
            <label>
                <input type="radio" name="MySelectInputName"><span>Bear</span></label>
            <label>
                <input type="radio" name="MySelectInputName"><span>Whisky</span></label>
            <label>
                <input type="radio" name="MySelectInputName"><span>Vodka</span></label>
            <label>
                <input type="radio" name="MySelectInputName"><span>Gin</span></label>
        </div>
    </div>
</div>

A small javascript/jQuery function:

function toggleDiv(element) {
    $(element).next('div').toggle('medium');
}

And the css:

.NestedSelect {
    display: inline-block;
    height: 100%;
    border: 1px Black solid;
    overflow-y: scroll;
}

.NestedSelect a:hover, .NestedSelect span:hover  {
    background-color: #0092ff;
    color: White;
    cursor: pointer;
}

.NestedSelect input[type="radio"] {
    display: none;
}

.NestedSelect input[type="radio"] + span {
    display: block;
    padding-left: 0px;
    padding-right: 5px;
}

.NestedSelect input[type="radio"]:checked + span {
    background-color: Black;
    color: White;
}

.NestedSelect div {
    display: none;
    margin-left: 15px;
    border-left: 1px black solid;
}

.NestedSelect label > span:before, .NestedSelect a:before{
    content: '- ';
}

.NestedSelect a {
    display: block;
}

Running sample in JSFiddle

Logging in Scala

With Scala 2.10+, consider ScalaLogging by Typesafe. It uses macros to deliver a very clean API.

https://github.com/typesafehub/scala-logging

Quoting from their wiki:

Fortunately Scala macros can be used to make our lives easier: ScalaLogging offers the class Logger with lightweight logging methods that will be expanded to the above idiom. So all we have to write is:

logger.debug(s"Some ${expensiveExpression} message!")

After the macro has been applied, the code will have been transformed into the above described idiom.

In addition, ScalaLogging offers traits such as LazyLogging, which conveniently provide a Logger instance initialized with the name of the class they are mixed into:

import com.typesafe.scalalogging.slf4j.LazyLogging

class MyClass extends LazyLogging {
  logger.debug("This is very convenient ;-)")
}

How do I find out what is hammering my SQL Server?

Run either of these a few seconds apart and compare; you'll spot the connection with high CPU. Alternatively: store the CPU values in a local variable, WAITFOR DELAY, then compare the stored and current CPU values.

select * from master..sysprocesses
where status = 'runnable' --comment this out
order by CPU desc

select * from master..sysprocesses
order by CPU desc

It may not be the most elegant, but it's effective and quick.

How is the default submit button on an HTML form determined?

my recipe:

<form>
<input type=hidden name=action value=login><!-- the magic! -->

<input type=text name=email>
<input type=text name=password>

<input type=submit name=action value=login>
<input type=submit name=action value="forgot password">
</form>

It will send the hidden field's value if none of the buttons is clicked.

If a button is clicked, it takes precedence and its value is passed.

How can I exclude all "permission denied" messages from "find"?

Those errors are printed out to the standard error output (fd 2). To filter them out, simply redirect all errors to /dev/null:

find . 2>/dev/null > some_file

or first join stderr and stdout and then grep out those specific errors:

find . 2>&1 | grep -v 'Permission denied' > some_file

Creating multiple log files of different content with log4j

This should get you started:

log4j.rootLogger=TRACE, QuietAppender, LoudAppender
# setup A1
log4j.appender.QuietAppender=org.apache.log4j.RollingFileAppender
log4j.appender.QuietAppender.Threshold=INFO
log4j.appender.QuietAppender.File=quiet.log
...


# setup A2
log4j.appender.LoudAppender=org.apache.log4j.RollingFileAppender
log4j.appender.LoudAppender.Threshold=DEBUG
log4j.appender.LoudAppender.File=loud.log
...

log4j.logger.com.yourpackage.yourclazz=TRACE

How do you overcome the HTML form nesting limitation?

I think Jason's right. If your "Delete" action is a minimal one, make that be in a form by itself, and line it up with the other buttons so as to make the interface look like one unified form, even if it's not.

Or, of course, redesign your interface, and let people delete somewhere else entirely which doesn't require them to see the enormo-form at all.

Logging best practices


Update: For extensions to System.Diagnostics, providing some of the missing listeners you might want, see Essential.Diagnostics on CodePlex (http://essentialdiagnostics.codeplex.com/)


Frameworks

Q: What frameworks do you use?

A: System.Diagnostics.TraceSource, built in to .NET 2.0.

It provides powerful, flexible, high-performance logging for applications; however, many developers are not aware of its capabilities and do not make full use of them.

There are some areas where additional functionality is useful, or where the functionality exists but is not well documented; however, this does not mean that the entire logging framework (which is designed to be extensible) should be thrown away and completely replaced by one of the popular alternatives (NLog, log4net, Common.Logging, or even EntLib Logging).

Rather than changing the way you add logging statements to your application and re-inventing the wheel, just extend the System.Diagnostics framework in the few places you need it.

It seems to me the other frameworks, even EntLib, simply suffer from Not Invented Here Syndrome, and I think they have wasted time re-inventing the basics that already work perfectly well in System.Diagnostics (such as how you write log statements), rather than filling in the few gaps that exist. In short, don't use them -- they aren't needed.

Features you may not have known:

  • Using the TraceEvent overloads that take a format string and args can help performance, as parameters are kept as separate references until after Filter.ShouldTrace() has succeeded. This means no expensive calls to ToString() on parameter values until after the system has confirmed the message will actually be logged.
  • The Trace.CorrelationManager allows you to correlate log statements about the same logical operation (see below).
  • VisualBasic.Logging.FileLogTraceListener is good for writing to log files and supports file rotation. Although in the VisualBasic namespace, it can be just as easily used in a C# (or other language) project simply by including the DLL.
  • When using EventLogTraceListener, if you call TraceEvent with multiple arguments and an empty or null format string, then the args are passed directly to EventLog.WriteEntry(), which is useful if you are using localized message resources.
  • The Service Trace Viewer tool (from WCF) is useful for viewing graphs of activity-correlated log files (even if you aren't using WCF). This can really help debug complex issues where multiple threads/activities are involved.
  • Avoid overhead by clearing all listeners (or removing Default); otherwise Default will pass everything to the trace system (and incur all those ToString() overheads).

Areas you might want to look at extending (if needed):

  • Database trace listener
  • Colored console trace listener
  • MSMQ / Email / WMI trace listeners (if needed)
  • Implement a FileSystemWatcher to call Trace.Refresh for dynamic configuration changes

Other Recommendations:

Use structured event id's, and keep a reference list (e.g. document them in an enum).

Having unique event id's for each (significant) event in your system is very useful for correlating and finding specific issues. It is easy to track back to the specific code that logs/uses the event ids, and can make it easy to provide guidance for common errors, e.g. error 5178 means your database connection string is wrong, etc.

Event id's should follow some kind of structure (similar to the Theory of Reply Codes used in email and HTTP), which allows you to treat them by category without knowing specific codes.

e.g. The first digit can detail the general class: 1xxx can be used for 'Start' operations, 2xxx for normal behaviour, 3xxx for activity tracing, 4xxx for warnings, 5xxx for errors, 8xxx for 'Stop' operations, 9xxx for fatal errors, etc.

The second digit can detail the area, e.g. 21xx for database information (41xx for database warnings, 51xx for database errors), 22xx for calculation mode (42xx for calculation warnings, etc), 23xx for another module, etc.

Assigned, structured event id's also allow you to use them in filters.
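To illustrate that kind of numbering scheme concretely, here is a small sketch, in Python purely for illustration (the answer itself targets .NET, and all event names and codes below are made up for the example):

from enum import IntEnum

class EventId(IntEnum):
    """Hypothetical structured event IDs: first digit = class, second digit = area."""
    SERVICE_STARTING = 1000       # 1xxx: 'Start' operations
    ORDER_ACCEPTED = 2200         # 2xxx: normal behaviour, x2xx: order/calculation area
    DB_QUERY_TRACE = 3100         # 3xxx: activity tracing, x1xx: database area
    DB_SLOW_QUERY = 4100          # 4xxx: warnings, x1xx: database area
    DB_CONNECTION_FAILED = 5100   # 5xxx: errors, x1xx: database area
    SERVICE_STOPPING = 8000       # 8xxx: 'Stop' operations
    UNHANDLED_EXCEPTION = 9000    # 9xxx: fatal errors

def is_database_event(event_id: int) -> bool:
    """Treat events by category without knowing specific codes: second digit == 1."""
    return (event_id // 100) % 10 == 1

print(is_database_event(EventId.DB_SLOW_QUERY))        # True
print(is_database_event(EventId.UNHANDLED_EXCEPTION))  # False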

Q: If you use tracing, do you make use of Trace.Correlation.StartLogicalOperation?

A: Trace.CorrelationManager is very useful for correlating log statements in any sort of multi-threaded environment (which is pretty much anything these days).

You need at least to set the ActivityId once for each logical operation in order to correlate.

Start/Stop and the LogicalOperationStack can then be used for simple stack-based context. For more complex contexts (e.g. asynchronous operations), using TraceTransfer to the new ActivityId (before changing it) allows correlation.

The Service Trace Viewer tool can be useful for viewing activity graphs (even if you aren't using WCF).

Q: Do you write this code manually, or do you use some form of aspect oriented programming to do it? Care to share a code snippet?

A: You may want to create a scope class, e.g. LogicalOperationScope, that (a) sets up the context when created and (b) resets the context when disposed.

This allows you to write code such as the following to automatically wrap operations:

  using( LogicalOperationScope operation = new LogicalOperationScope("Operation") )
  {
    // .. do work here
  }

On creation the scope could first set ActivityId if needed, call StartLogicalOperation and then log a TraceEventType.Start message. On Dispose it could log a Stop message, and then call StopLogicalOperation.

Q: Do you provide any form of granularity over trace sources? E.g., WPF TraceSources allow you to configure them at various levels.

A: Yes, multiple Trace Sources are useful / important as systems get larger.

Whilst you probably want to consistently log all Warning & above, or all Information & above messages, for any reasonably sized system the volume of Activity Tracing (Start, Stop, etc) and Verbose logging simply becomes too much.

Rather than having only one switch that turns it all either on or off, it is useful to be able to turn on this information for one section of your system at a time.

This way, you can locate significant problems from the usual logging (all warnings, errors, etc), and then "zoom in" on the sections you want and set them to Activity Tracing or even Debug levels.

The number of trace sources you need depends on your application, e.g. you may want one trace source per assembly or per major section of your application.

If you need even more fine tuned control, add individual boolean switches to turn on/off specific high volume tracing, e.g. raw message dumps. (Or a separate trace source could be used, similar to WCF/WPF).

You might also want to consider separate trace sources for Activity Tracing vs general (other) logging, as it can make it a bit easier to configure filters exactly how you want them.

Note that messages can still be correlated via ActivityId even if different sources are used, so use as many as you need.


Listeners

Q: What log outputs do you use?

This can depend on what type of application you are writing, and what things are being logged. Usually different things go in different places (i.e. multiple outputs).

I generally classify outputs into three groups:

(1) Events - Windows Event Log (and trace files)

e.g. If writing a server/service, then best practice on Windows is to use the Windows Event Log (you don't have a UI to report to).

In this case all Fatal, Error, Warning and (service-level) Information events should go to the Windows Event Log. The Information level should be reserved for these types of high-level events, the ones that you want to go in the event log, e.g. "Service Started", "Service Stopped", "Connected to Xyz", and maybe even "Schedule Initiated", "User Logged On", etc.

In some cases you may want to make writing to the event log a built-in part of your application and not via the trace system (i.e. write Event Log entries directly). This means it can't accidentally be turned off. (Note that you still also want to record the same event in your trace system so you can correlate.)

In contrast, a Windows GUI application would generally report these to the user (although they may also log to the Windows Event Log).

Events may also have related performance counters (e.g. number of errors/sec), and it can be important to co-ordinate any direct writing to the Event Log, performance counters, writing to the trace system and reporting to the user so they occur at the same time.

i.e. If a user sees an error message at a particular time, you should be able to find the same error message in the Windows Event Log, and then the same event with the same timestamp in the trace log (along with other trace details).

(2) Activities - Application Log files or database table (and trace files)

This is the regular activity that a system does, e.g. web page served, stock market trade lodged, order taken, calculation performed, etc.

Activity Tracing (start, stop, etc) is useful here (at the right granularity).

Also, it is very common to use a specific Application Log (sometimes called an Audit Log). Usually this is a database table or an application log file and contains structured data (i.e. a set of fields).

Things can get a bit blurred here depending on your application. A good example might be a web server which writes each request to a web log; similar examples might be a messaging system or calculation system where each operation is logged along with application-specific details.

A not-so-good example is stock market trades or a sales ordering system. In these systems you are probably already logging the activity, as it has important business value; however, the principle of correlating it to other actions is still important.

As well as custom application logs, activities also often have related performance counters, e.g. number of transactions per second.

In general you should co-ordinate logging of activities across different systems, i.e. write to your application log at the same time as you increase your performance counter and log to your trace system. If you do them all at the same time (or straight after each other in the code), then debugging problems is easier than if they all occur at different times/locations in the code.

(3) Debug Trace - Text file, or maybe XML or database.

This is information at Verbose level and lower (e.g. custom boolean switches to turn on/off raw data dumps). This provides the guts or details of what a system is doing at a sub-activity level.

This is the level you want to be able to turn on/off for individual sections of your application (hence the multiple sources). You don't want this stuff cluttering up the Windows Event Log. Sometimes a database is used, but more likely are rolling log files that are purged after a certain time.

A big difference between this information and an Application Log file is that it is unstructured. Whilst an Application Log may have fields for To, From, Amount, etc., Verbose debug traces may be whatever a programmer puts in, e.g. "checking values X={value}, Y=false", or random comments/markers like "Done it, trying again".

One important practice is to make sure things you put in application log files or the Windows Event Log also get logged to the trace system with the same details (e.g. timestamp). This allows you to then correlate the different logs when investigating.

If you are planning to use a particular log viewer because you have complex correlation, e.g. the Service Trace Viewer, then you need to use an appropriate format, i.e. XML. Otherwise, a simple text file is usually good enough -- at the lower levels the information is largely unstructured, so you might find dumps of arrays, stack dumps, etc. Provided you can correlate back to more structured logs at higher levels, things should be okay.

Q: If using files, do you use rolling logs or just a single file? How do you make the logs available for people to consume?

A: For files, generally you want rolling log files from a manageability point of view (with System.Diagnostics simply use VisualBasic.Logging.FileLogTraceListener).

Availability again depends on the system. If you are only talking about files then for a server/service, rolling files can just be accessed when necessary. (Windows Event Log or Database Application Logs would have their own access mechanisms).

If you don't have easy access to the file system, then debug tracing to a database may be easier. [i.e. implement a database TraceListener].

One interesting solution I saw for a Windows GUI application was that it logged very detailed tracing information to a "flight recorder" whilst running; when you shut it down, if there had been no problems, it simply deleted the file.

If, however, it crashed or encountered a problem, the file was not deleted. Either it catches the error, or the next time it runs it will notice the file, and then it can take action, e.g. compress it (e.g. with 7zip) and email it or otherwise make it available.

Many systems these days incorporate automated reporting of failures to a central server (after checking with users, e.g. for privacy reasons).


Viewing

Q: What tools do you use for viewing the logs?

A: If you have multiple logs for different reasons then you will use multiple viewers.

Notepad/vi/Notepad++ or any other text editor is the basic option for plain text logs.

If you have complex operations, e.g. activities with transfers, then you would, obviously, use a specialized tool like the Service Trace Viewer. (But if you don't need it, then a text editor is easier).

As I generally log high level information to the Windows Event Log, it provides a quick way to get an overview in a structured manner (look for the pretty error/warning icons). You only need to start hunting through text files if there is not enough in the log, although at least the log gives you a starting point. (At this point, making sure your logs have co-ordinated entries becomes useful.)

Generally the Windows Event Log also makes these significant events available to monitoring tools like MOM or OpenView.

Others --

If you log to a database it can be easy to filter and sort information (e.g. zoom in on a particular activity id). With text files you can use grep/PowerShell or similar to filter on the particular GUID you want.

MS Excel (or another spreadsheet program). This can be useful for analysing structured or semi-structured information if you can import it with the right delimiters so that different values go in different columns.

When running a service in debug/test I usually host it in a console application for simplicity. There I find a colored console logger useful (e.g. red for errors, yellow for warnings, etc.), but you need to implement it as a custom trace listener.

Note that the framework does not include a colored console logger or a database logger so, right now, you would need to write these if you need them (it's not too hard).

It really annoys me that several frameworks (log4net, EntLib, etc) have wasted time re-inventing the wheel and re-implemented basic logging, filtering, and logging to text files, the Windows Event Log, and XML files, each in their own different way (log statements are different in each); each has then implemented their own version of, for example, a database logger, when most of that already existed and all that was needed was a couple more trace listeners for System.Diagnostics. Talk about a big waste of duplicate effort.

Q: If you are building an ASP.NET solution, do you also use ASP.NET Health Monitoring? Do you include trace output in the health monitor events? What about Trace.axd?

These things can be turned on/off as needed. I find Trace.axd quite useful for debugging how a server responds to certain things, but it's not generally useful in a heavily used environment or for long term tracing.

Q: What about custom performance counters?

For a professional application, especially a server/service, I expect to see it fully instrumented with both Performance Monitor counters and logging to the Windows Event Log. These are the standard tools in Windows and should be used.

You need to make sure you include installers for the performance counters and event logs that you use; these should be created at installation time (when installing as administrator). When your application is running normally it should not need administration privileges (and so won't be able to create missing logs).

This is a good reason to practice developing as a non-administrator (have a separate admin account for when you need to install services, etc). If writing to the Event Log, .NET will automatically create a missing log the first time you write to it; if you develop as a non-admin you will catch this early and avoid a nasty surprise when a customer installs your system and then can't use it because they aren't running as administrator.

The Definitive C++ Book Guide and List

Beginner

Introductory, no previous programming experience

  • C++ Primer * (Stanley Lippman, Josée Lajoie, and Barbara E. Moo) (updated for C++11) Coming in at around 1k pages, this is a very thorough introduction to C++ that covers just about everything in the language in a very accessible format and in great detail. The fifth edition (released August 16, 2012) covers C++11. [Review]

    * Not to be confused with C++ Primer Plus (Stephen Prata), with a significantly less favorable review.

  • Programming: Principles and Practice Using C++ (Bjarne Stroustrup, 2nd Edition - May 25, 2014) (updated for C++11/C++14) An introduction to programming using C++ by the creator of the language. A good read, that assumes no previous programming experience, but is not only for beginners.

Introductory, with previous programming experience

  • A Tour of C++ (Bjarne Stroustrup) (2nd edition for C++17) The “tour” is a quick (about 180 pages and 14 chapters) tutorial overview of all of standard C++ (language and standard library, and using C++11) at a moderately high level for people who already know C++ or at least are experienced programmers. This book is an extended version of the material that constitutes Chapters 2-5 of The C++ Programming Language, 4th edition.

  • Accelerated C++ (Andrew Koenig and Barbara Moo, 1st Edition - August 24, 2000) This basically covers the same ground as the C++ Primer, but does so on a fourth of its space. This is largely because it does not attempt to be an introduction to programming, but an introduction to C++ for people who've previously programmed in some other language. It has a steeper learning curve, but, for those who can cope with this, it is a very compact introduction to the language. (Historically, it broke new ground by being the first beginner's book to use a modern approach to teaching the language.) Despite this, the C++ it teaches is purely C++98. [Review]

Best practices

  • Effective C++ (Scott Meyers, 3rd Edition - May 22, 2005) This was written with the aim of being the best second book C++ programmers should read, and it succeeded. Earlier editions were aimed at programmers coming from C, the third edition changes this and targets programmers coming from languages like Java. It presents ~50 easy-to-remember rules of thumb along with their rationale in a very accessible (and enjoyable) style. For C++11 and C++14 the examples and a few issues are outdated and Effective Modern C++ should be preferred. [Review]

  • Effective Modern C++ (Scott Meyers) This is basically the new version of Effective C++, aimed at C++ programmers making the transition from C++03 to C++11 and C++14.

  • Effective STL (Scott Meyers) This aims to do for the part of the standard library coming from the STL what Effective C++ did for the language as a whole: it presents rules of thumb along with their rationale. [Review]


Intermediate

  • More Effective C++ (Scott Meyers) Even more rules of thumb than Effective C++. Not as important as the ones in the first book, but still good to know.

  • Exceptional C++ (Herb Sutter) Presented as a set of puzzles, this has one of the best and most thorough discussions of proper resource management and exception safety in C++ through Resource Acquisition Is Initialization (RAII), in addition to in-depth coverage of a variety of other topics including the pimpl idiom, name lookup, good class design, and the C++ memory model. [Review]

  • More Exceptional C++ (Herb Sutter) Covers additional exception safety topics not covered in Exceptional C++, in addition to discussion of effective object-oriented programming in C++ and correct use of the STL. [Review]

  • Exceptional C++ Style (Herb Sutter) Discusses generic programming, optimization, and resource management; this book also has an excellent exposition of how to write modular code in C++ by using non-member functions and the single responsibility principle. [Review]

  • C++ Coding Standards (Herb Sutter and Andrei Alexandrescu) “Coding standards” here doesn't mean “how many spaces should I indent my code?” This book contains 101 best practices, idioms, and common pitfalls that can help you to write correct, understandable, and efficient C++ code. [Review]

  • C++ Templates: The Complete Guide (David Vandevoorde and Nicolai M. Josuttis) This is the book about templates as they existed before C++11. It covers everything from the very basics to some of the most advanced template metaprogramming and explains every detail of how templates work (both conceptually and in how they are implemented) and discusses many common pitfalls. Has excellent summaries of the One Definition Rule (ODR) and overload resolution in the appendices. A second edition covering C++11, C++14 and C++17 has already been published. [Review]

  • C++ 17 - The Complete Guide (Nicolai M. Josuttis) This book describes all the new features introduced in the C++17 Standard covering everything from the simple ones like 'Inline Variables', 'constexpr if' all the way up to 'Polymorphic Memory Resources' and 'New and Delete with overaligned Data'. [Review]

  • C++ in Action (Bartosz Milewski). This book explains C++ and its features by building an application from the ground up. [Review]

  • Functional Programming in C++ (Ivan Cukic). This book introduces functional programming techniques to modern C++ (C++11 and later). A very nice read for those who want to apply functional programming paradigms to C++.

  • Professional C++ (Marc Gregoire, 5th Edition - Feb 2021) Provides a comprehensive and detailed tour of the C++ language implementation replete with professional tips and concise but informative in-text examples, emphasizing C++20 features. Uses C++20 features, such as modules and std::format throughout all examples.


Advanced

  • Modern C++ Design (Andrei Alexandrescu) A groundbreaking book on advanced generic programming techniques. Introduces policy-based design, type lists, and fundamental generic programming idioms then explains how many useful design patterns (including small object allocators, functors, factories, visitors, and multi-methods) can be implemented efficiently, modularly, and cleanly using generic programming. [Review]

  • C++ Template Metaprogramming (David Abrahams and Aleksey Gurtovoy)

  • C++ Concurrency In Action (Anthony Williams) A book covering C++11 concurrency support including the thread library, the atomics library, the C++ memory model, locks and mutexes, as well as issues of designing and debugging multithreaded applications. A second edition covering C++14 and C++17 has already been published. [Review]

  • Advanced C++ Metaprogramming (Davide Di Gennaro) A pre-C++11 manual of TMP techniques, focused more on practice than theory. There are a ton of snippets in this book, some of which are made obsolete by type traits, but the techniques are nonetheless useful to know. If you can put up with the quirky formatting/editing, it is easier to read than Alexandrescu and arguably more rewarding. For more experienced developers, there is a good chance that you may pick up something about a dark corner of C++ (a quirk) that usually only comes about through extensive experience.


Reference Style - All Levels

  • The C++ Programming Language (Bjarne Stroustrup) (updated for C++11) The classic introduction to C++ by its creator. Written to parallel the classic K&R, this indeed reads very much like it and covers just about everything from the core language to the standard library, to programming paradigms to the language's philosophy. [Review] Note: All releases of the C++ standard are tracked in the question "Where do I find the current C or C++ standard documents?".

  • C++ Standard Library Tutorial and Reference (Nicolai Josuttis) (updated for C++11) The introduction and reference for the C++ Standard Library. The second edition (released on April 9, 2012) covers C++11. [Review]

  • The C++ IO Streams and Locales (Angelika Langer and Klaus Kreft) There's very little to say about this book except that, if you want to know anything about streams and locales, then this is the one place to find definitive answers. [Review]

C++11/14/17/… References:

  • The C++11/14/17 Standard (INCITS/ISO/IEC 14882:2011/2014/2017) This, of course, is the final arbiter of all that is or isn't C++. Be aware, however, that it is intended purely as a reference for experienced users willing to devote considerable time and effort to its understanding. The C++17 standard is released in electronic form for 198 Swiss Francs.

  • The C++17 standard is available, but seemingly not in an economical form – directly from the ISO it costs 198 Swiss Francs (about $200 US). For most people, the final draft before standardization is more than adequate (and free). Many will prefer an even newer draft, documenting new features that are likely to be included in C++20.

  • Overview of the New C++ (C++11/14) (PDF only) (Scott Meyers) (updated for C++14) These are the presentation materials (slides and some lecture notes) of a three-day training course offered by Scott Meyers, who's a highly respected author on C++. Even though the list of items is short, the quality is high.

  • The C++ Core Guidelines (C++11/14/17/…) (edited by Bjarne Stroustrup and Herb Sutter) is an evolving online document consisting of a set of guidelines for using modern C++ well. The guidelines are focused on relatively higher-level issues, such as interfaces, resource management, memory management and concurrency affecting application architecture and library design. The project was announced at CppCon'15 by Bjarne Stroustrup and others and welcomes contributions from the community. Most guidelines are supplemented with a rationale and examples as well as discussions of possible tool support. Many rules are designed specifically to be automatically checkable by static analysis tools.

  • The C++ Super-FAQ (Marshall Cline, Bjarne Stroustrup and others) is an effort by the Standard C++ Foundation to unify the C++ FAQs previously maintained individually by Marshall Cline and Bjarne Stroustrup and also incorporating new contributions. The items mostly address issues at an intermediate level and are often written with a humorous tone. Not all items might be fully up to date with the latest edition of the C++ standard yet.

  • cppreference.com (C++03/11/14/17/…) (initiated by Nate Kohl) is a wiki that summarizes the basic core-language features and has extensive documentation of the C++ standard library. The documentation is very precise but is easier to read than the official standard document and provides better navigation due to its wiki nature. The project documents all versions of the C++ standard and the site allows filtering the display for a specific version. The project was presented by Nate Kohl at CppCon'14.


Classics / Older

Note: Some information contained within these books may not be up-to-date or no longer considered best practice.

  • The Design and Evolution of C++ (Bjarne Stroustrup) If you want to know why the language is the way it is, this book is where you find answers. This covers everything before the standardization of C++.

  • Ruminations on C++ - (Andrew Koenig and Barbara Moo) [Review]

  • Advanced C++ Programming Styles and Idioms (James Coplien) A predecessor of the pattern movement, it describes many C++-specific “idioms”. It's certainly a very good book and might still be worth a read if you can spare the time, but quite old and not up-to-date with current C++.

  • Large Scale C++ Software Design (John Lakos) Lakos explains techniques to manage very big C++ software projects. Certainly a good read, if only it were up to date. It was written long before C++98 and misses many features (e.g. namespaces) important for large-scale projects. If you need to work on a big C++ software project, you might want to read it, although you need to take more than a grain of salt with it. The first volume of a new edition was released in 2019.

  • Inside the C++ Object Model (Stanley Lippman) If you want to know how virtual member functions are commonly implemented and how base objects are commonly laid out in memory in a multi-inheritance scenario, and how all this affects performance, this is where you will find thorough discussions of such topics.

  • The Annotated C++ Reference Manual (Bjarne Stroustrup, Margaret A. Ellis) This book is quite outdated in that it explores the 1989 C++ 2.0 version - templates, exceptions, namespaces and new casts were not yet introduced. That said, this book goes through the entire C++ standard of the time, explaining the rationale, the possible implementations, and the features of the language. This is not a book to learn programming principles and patterns in C++, but to understand every aspect of the C++ language.

  • Thinking in C++ (Bruce Eckel, 2nd Edition, 2000). Two volumes; a free, tutorial-style set of intro-level books. Downloads: vol 1, vol 2. Unfortunately they're marred by a number of trivial errors (e.g. maintaining that temporaries are automatically const), with no official errata list. A partial 3rd party errata list is available at http://www.computersciencelab.com/Eckel.htm, but it is apparently not maintained.

  • Scientific and Engineering C++: An Introduction to Advanced Techniques and Examples (John Barton and Lee Nackman) A comprehensive and very detailed book that tried to explain and make use of all the features available in C++, in the context of numerical methods. It introduced several new techniques at the time, such as the Curiously Recurring Template Pattern (CRTP, also called the Barton-Nackman trick), and pioneered techniques such as dimensional analysis and automatic differentiation. It came with a lot of compilable and useful code, ranging from an expression parser to a Lapack wrapper. The code is still available online. Unfortunately, the book has become somewhat outdated in style and C++ features; however, it was an incredible tour-de-force at the time (1994, pre-STL). The chapters on dynamic inheritance are a bit complicated to understand and not very useful. An updated version of this classic book that includes move semantics and the lessons learned from the STL would be very nice.

How do I get ruby to print a full backtrace instead of a truncated one?

I was getting these errors when trying to load my test environment (via rake test or autotest) and the IRB suggestions didn't help. I ended up wrapping my entire test/test_helper.rb in a begin/rescue block and that fixed things up.

begin
  class ActiveSupport::TestCase
    #awesome stuff
  end
rescue => e
  puts e.backtrace
end

Turning off hibernate logging console output

The first thing to do is to figure out which logging framework is actually used.

Many frameworks are already covered by other authors above. In case you are using Logback you can solve the problem by adding this logback.xml to your classpath:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <logger name="org.hibernate" level="WARN"/>
</configuration>

Further information: Logback Manual-Configuration

How to access the last value in a vector?

To answer this not from an aesthetic but from a performance-oriented point of view, I've put all of the above suggestions through a benchmark. To be precise, I've considered the suggestions

  • x[length(x)]
  • mylast(x), where mylast is a C++ function implemented through Rcpp,
  • tail(x, n=1)
  • dplyr::last(x)
  • x[end(x)[1]]
  • rev(x)[1]

and applied them to random vectors of various sizes (10^3, 10^4, 10^5, 10^6, and 10^7). Before we look at the numbers, I think it should be clear that anything that becomes noticeably slower with greater input size (i.e., anything that is not O(1)) is not an option. Here's the code that I used:

Rcpp::cppFunction('double mylast(NumericVector x) { int n = x.size(); return x[n-1]; }')
options(width=100)
for (n in c(1e3,1e4,1e5,1e6,1e7)) {
  x <- runif(n);
  print(microbenchmark::microbenchmark(x[length(x)],
                                       mylast(x),
                                       tail(x, n=1),
                                       dplyr::last(x),
                                       x[end(x)[1]],
                                       rev(x)[1]))}

It gives me

Unit: nanoseconds
           expr   min      lq     mean  median      uq   max neval
   x[length(x)]   171   291.5   388.91   337.5   390.0  3233   100
      mylast(x)  1291  1832.0  2329.11  2063.0  2276.0 19053   100
 tail(x, n = 1)  7718  9589.5 11236.27 10683.0 12149.0 32711   100
 dplyr::last(x) 16341 19049.5 22080.23 21673.0 23485.5 70047   100
   x[end(x)[1]]  7688 10434.0 13288.05 11889.5 13166.5 78536   100
      rev(x)[1]  7829  8951.5 10995.59  9883.0 10890.0 45763   100
Unit: nanoseconds
           expr   min      lq     mean  median      uq    max neval
   x[length(x)]   204   323.0   475.76   386.5   459.5   6029   100
      mylast(x)  1469  2102.5  2708.50  2462.0  2995.0   9723   100
 tail(x, n = 1)  7671  9504.5 12470.82 10986.5 12748.0  62320   100
 dplyr::last(x) 15703 19933.5 26352.66 22469.5 25356.5 126314   100
   x[end(x)[1]] 13766 18800.5 27137.17 21677.5 26207.5  95982   100
      rev(x)[1] 52785 58624.0 78640.93 60213.0 72778.0 851113   100
Unit: nanoseconds
           expr     min        lq       mean    median        uq     max neval
   x[length(x)]     214     346.0     583.40     529.5     720.0    1512   100
      mylast(x)    1393    2126.0    4872.60    4905.5    7338.0    9806   100
 tail(x, n = 1)    8343   10384.0   19558.05   18121.0   25417.0   69608   100
 dplyr::last(x)   16065   22960.0   36671.13   37212.0   48071.5   75946   100
   x[end(x)[1]]  360176  404965.5  432528.84  424798.0  450996.0  710501   100
      rev(x)[1] 1060547 1140149.0 1189297.38 1180997.5 1225849.0 1383479   100
Unit: nanoseconds
           expr     min        lq        mean    median         uq      max neval
   x[length(x)]     327     584.0     1150.75     996.5     1652.5     3974   100
      mylast(x)    2060    3128.5     7541.51    8899.0     9958.0    16175   100
 tail(x, n = 1)   10484   16936.0    30250.11   34030.0    39355.0    52689   100
 dplyr::last(x)   19133   47444.5    55280.09   61205.5    66312.5   105851   100
   x[end(x)[1]] 1110956 2298408.0  3670360.45 2334753.0  4475915.0 19235341   100
      rev(x)[1] 6536063 7969103.0 11004418.46 9973664.5 12340089.5 28447454   100
Unit: nanoseconds
           expr      min         lq         mean      median          uq       max neval
   x[length(x)]      327      722.0      1644.16      1133.5      2055.5     13724   100
      mylast(x)     1962     3727.5      9578.21      9951.5     12887.5     41773   100
 tail(x, n = 1)     9829    21038.0     36623.67     43710.0     48883.0     66289   100
 dplyr::last(x)    21832    35269.0     60523.40     63726.0     75539.5    200064   100
   x[end(x)[1]] 21008128 23004594.5  37356132.43  30006737.0  47839917.0 105430564   100
      rev(x)[1] 74317382 92985054.0 108618154.55 102328667.5 112443834.0 187925942   100

This immediately rules out anything involving rev or end since they're clearly not O(1) (and the resulting expressions are evaluated in a non-lazy fashion). tail and dplyr::last are not far from being O(1) but they're also considerably slower than mylast(x) and x[length(x)]. Since mylast(x) is slower than x[length(x)] and provides no benefits (rather, it's custom and does not handle an empty vector gracefully), I think the answer is clear: Please use x[length(x)].

Best implementation for Key Value Pair Data Structure?

One possible thing you could do is use the Dictionary object straight out of the box and then just extend it with your own modifications:

public class TokenTree : Dictionary<string, string>
{
    public IDictionary<string, string> SubPairs;
}

This gives you the advantage of not having to enforce the rules of IDictionary for your Key (e.g., key uniqueness, etc).

And yup you got the concept of the constructor right :)

Where can I find WcfTestClient.exe (part of Visual Studio)

The prerequisite for having WcfTestClient is having the Windows Communication Foundation component installed. If WcfTestClient is missing, install it by modifying Visual Studio:

Control Panel > Apps & Features > Visual Studio (your version)

In Visual Studio Installer, click on Modify, choose Individual components tab and then select (see below screenshot):

Windows Communication Foundation

Click on Modify and voilà, the application will be on your disk.

Visual Studio Installer

If you want to use WcfTestClient with no Visual Studio, see answer(s) on: How can the Wcf Test Client be used without Visual Studio?

Create zip file and ignore directory structure

Somewhat related - I was looking for a solution to do the same for directories. Unfortunately the -j option does not work for this :(

Here is a good solution on how to get it done: https://superuser.com/questions/119649/avoid-unwanted-path-in-zip-file

How do I rename a file using VBScript?

From what I understand, your context is to download from ALM. In this case, ALM saves the files under: C:/Users/user/AppData/Local/Temp/TD_80/ALM_VERSION/random_string/Attach/artefact_type/ID

where :

ALM_VERSION is the version of your alm installation, e.g 12.53.2.0_952

artefact_type is the type of the artefact, e.g : REQ

ID is the ID of the artefact

Below is a code sample which connects to an instance of ALM (domain 'DEFAULT', project 'MY_PROJECT'), gets all the attachments from a REQ with id 6 and saves them in c:/tmp. It's Ruby code, but it's easy to transcribe to VBScript.

require 'win32ole'
require 'fileutils'

# login to ALM and domain/project 
alm_server = ENV['CURRRENT_ALM_SERVER']
tdc = WIN32OLE.new('TDApiOle80.TDConnection')
tdc.InitConnectionEx(alm_server)
username, password = ENV['ALM_CREDENTIALS'].split(':')
tdc.Login(username, password)
tdc.Connect('DEFAULT', 'MY_PROJECT')

# get a handle for the Requirements 
reqFact = tdc.ReqFactory

# get Requirement with ID=6
req = reqFact.item(6)

# get a handle for the attachment of REQ 
att = req.Attachments

# get a handle for the list of attachements
attList = att.NewList("")

thePath= 'c:/tmp'

# for each attachment:
attList.each do |el|
  clientPath = nil

  # download the attachment to its default location
  el.Load true, clientPath

  baseName = File.basename(el.FileName)
  dirName = File.dirname(el.FileName)
  puts "file downloaded as : #{baseName}\n in Folder #{dirName}"  
  FileUtils.mkdir_p thePath
  puts "now moving #{baseName} to #{thePath}"  
  FileUtils.mv el.FileName, thePath
end

The output:

=> file downloaded as : REQ_6_20191112_143346.png

=> in Folder C:\Users\user\AppData\Local\Temp\TD_80\12.53.2.0_952\e68ab622\Attach\REQ\6

=> now moving REQ_6_20191112_143346.png to c:/tmp

ORDER BY the IN value list

On researching this some more I found this solution:

SELECT * FROM "comments" WHERE ("comments"."id" IN (1,3,2,4)) 
ORDER BY CASE "comments"."id"
WHEN 1 THEN 1
WHEN 3 THEN 2
WHEN 2 THEN 3
WHEN 4 THEN 4
END

However this seems rather verbose and might have performance issues with large datasets. Can anyone comment on these issues?

How to select following sibling/xml tag using xpath

For completeness - adding to the accepted answer above - in case you are interested in any sibling regardless of the element type, you can use the variation:

following-sibling::*

How to extract public key using OpenSSL?

For AWS, to import an existing public key:

  1. Export the public key from the .pem by doing this (on Linux):

    openssl rsa -in ./AWSGeneratedKey.pem -pubout -out PublicKey.pub
    

This will produce a file which, if you open it in a text editor, looks something like this...

-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAn/8y3uYCQxSXZ58OYceG
A4uPdGHZXDYOQR11xcHTrH13jJEzdkYZG8irtyG+m3Jb6f9F8WkmTZxl+4YtkJdN
9WyrKhxq4Vbt42BthadX3Ty/pKkJ81Qn8KjxWoL+SMaCGFzRlfWsFju9Q5C7+aTj
eEKyFujH5bUTGX87nULRfg67tmtxBlT8WWWtFe2O/wedBTGGQxXMpwh4ObjLl3Qh
bfwxlBbh2N4471TyrErv04lbNecGaQqYxGrY8Ot3l2V2fXCzghAQg26Hc4dR2wyA
PPgWq78db+gU3QsePeo2Ki5sonkcyQQQlCkL35Asbv8khvk90gist4kijPnVBCuv
cwIDAQAB
-----END PUBLIC KEY-----
  2. However, AWS will NOT accept this file as-is.

    You have to strip off the -----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY----- lines from the file. Save it and import it, and it should work in AWS.

npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]

This still appears to be an issue, causing package installations to be aborted with warnings about optional packages not being installed because of "Unsupported platform".

The problem relates to the "shrinkwrap" or package-lock.json which gets persisted after every package manager execution. Subsequent attempts keep failing as this file is referenced instead of package.json.

Adding these options to the npm install command should allow packages to install again.

   --no-optional argument will prevent optional dependencies from being installed.

   --no-shrinkwrap argument, which will ignore an available package lock or
                   shrinkwrap file and use the package.json instead.

   --no-package-lock argument will prevent npm from creating a package-lock.json file.

The complete command looks like this:

    npm install --no-optional --no-shrinkwrap --no-package-lock

nJoy!

MSVCP120d.dll missing

I downloaded the 32-bit versions of msvcr120d.dll and msvcp120d.dll and put them into the Debug folder of my project. It worked well. (My computer is a 64-bit machine.)

How do I clear my Jenkins/Hudson build history?

Another easy way to clean up builds is to add the Discard Old Build plugin to your jobs. Set a maximum number of builds to save and then run the job again:

https://wiki.jenkins-ci.org/display/JENKINS/Discard+Old+Build+plugin

How to use sed to remove the last n lines of a file

This will remove the last 3 lines from file:

for i in $(seq 1 3); do sed -i '$d' file; done;

jQuery: If this HREF contains

use this

$("a").each(function () {
    var href=$(this).prop('href');
    if (href.indexOf('?') > -1) {
        alert("Contains questionmark");
    }
});

Android, landscape only orientation?

Just two steps needed:

  1. Apply setRequestedOrientation(ActivityInfo.SCREEN_ORIENTATION_LANDSCAPE); after setContentView().

  2. In the AndroidManifest.xml, put this statement: <activity android:name=".YOURCLASSNAME" android:screenOrientation="landscape" />

Hope it helps and happy coding :)

Where do I call the BatchNormalization function in Keras?

This thread is misleading. Tried commenting on Lucas Ramadan's answer, but I don't have the right privileges yet, so I'll just put this here.

Batch normalization works best after the activation function, and here or here is why: it was developed to prevent internal covariate shift. Internal covariate shift occurs when the distribution of the activations of a layer shifts significantly throughout training. Batch normalization is used so that the distribution of the inputs (and these inputs are literally the result of an activation function) to a specific layer doesn't change over time due to parameter updates from each batch (or at least, allows it to change in an advantageous way). It uses batch statistics to do the normalizing, and then uses the batch normalization parameters (gamma and beta in the original paper) "to make sure that the transformation inserted in the network can represent the identity transform" (quote from original paper). But the point is that we're trying to normalize the inputs to a layer, so it should always go immediately before the next layer in the network. Whether or not that's after an activation function is dependent on the architecture in question.
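To make the placement concrete, here is a minimal sketch of the idea (using the tf.keras API; the layer sizes and this exact toy architecture are my own assumptions, not part of the original answer), with batch normalization placed immediately before the next layer, i.e. after the previous layer's activation:

from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical toy network: normalize the inputs that each following Dense layer receives.
model = keras.Sequential([
    layers.Dense(64, input_shape=(20,)),
    layers.Activation("relu"),
    layers.BatchNormalization(),   # normalizes what the next Dense layer sees
    layers.Dense(64),
    layers.Activation("relu"),
    layers.BatchNormalization(),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()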

How to find a string inside a entire database?

Common Resource Grep (crgrep) will search for string matches in tables/columns by name or content and supports a number of DBs, including SQLServer, Oracle and others. Full wild-carding and other useful options.

It's opensource (I'm the author).

http://sourceforge.net/projects/crgrep/

Difference between & and && in Java?

'&' evaluates both operands, while '&&' only evaluates the second operand if the first is true. This is known as short-circuiting and may be considered an optimization. It is especially useful in guarding against null references (NullPointerException):

if (x != null && x.equals("*BINGO*")) {
  // then do something with x...
}

How to validate inputs dynamically created using ng-repeat, ng-show (angular)

Here is an example of how I do it. I don't know if it is the best solution, but it works perfectly.

First, the HTML. Look at ng-class: it calls the hasError function. Also look at the input's name declaration: I use $index to create different input names.

<div data-ng-repeat="tipo in currentObject.Tipo"
    ng-class="{'has-error': hasError(planForm, 'TipoM', 'required', $index) || hasError(planForm, 'TipoM', 'maxlength', $index)}">
    <input ng-model="tipo.Nombre" maxlength="100" required
        name="{{'TipoM' + $index}}"/>

And now, here is the hasError function:

$scope.hasError = function (form, elementName, errorType, index) {
           if (form == undefined
               || elementName == undefined
               || errorType == undefined
               || index == undefined)
               return false;

           var element = form[elementName + index];
           return (element != null && element.$error[errorType] && element.$touched);
       };

How to get the list of all printers in computer

Try this:

foreach (string printer in System.Drawing.Printing.PrinterSettings.InstalledPrinters)
{
    MessageBox.Show(printer);
}

Getting current directory in .NET web application

The current directory is a system-level feature; it returns the directory that the server was launched from. It has nothing to do with the website.

You want HttpRuntime.AppDomainAppPath.

If you're in an HTTP request, you can also call Server.MapPath("~/Whatever").

How can I rename a single column in a table at select?

If you are using SQL Server, use brackets or single quotes around the alias name in the query you have in code, e.g. SELECT ProductName AS [Product Name] or SELECT ProductName AS 'Product Name'.

Why does 2 mod 4 = 2?

When you divide 2 by 4, you get 0 with 2 left over, i.e. remaining. Modulo is just that remainder after the division.
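A quick way to see this, sketched in Python purely for illustration:

q, r = divmod(2, 4)   # 2 = 4*0 + 2, so quotient 0 and remainder 2
print(q, r)           # 0 2
print(2 % 4)          # 2 -- the modulo operator returns just the remainder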

How to affect other elements when one element is hovered

Using the sibling selector is the general solution for styling other elements when hovering over a given one, but it works only if the other elements follow the given one in the DOM. What can we do when the other elements should actually be before the hovered one? Say we want to implement a signal bar rating widget like the one below:

Signal bar rating widget

This can actually be done easily using the CSS flexbox model, by setting flex-direction to row-reverse, so that the elements are displayed in the opposite order from their order in the DOM. The screenshot above is from such a widget, implemented with pure CSS.

Flexbox is very well supported by 95% of modern browsers.

.rating {
  display: flex;
  flex-direction: row-reverse;
  width: 9rem;
}
.rating div {
  flex: 1;
  align-self: flex-end;
  background-color: black;
  border: 0.1rem solid white;
}
.rating div:hover {
  background-color: lightblue;
}
.rating div[data-rating="1"] {
  height: 5rem;
}
.rating div[data-rating="2"] {
  height: 4rem;
}
.rating div[data-rating="3"] {
  height: 3rem;
}
.rating div[data-rating="4"] {
  height: 2rem;
}
.rating div[data-rating="5"] {
  height: 1rem;
}
.rating div:hover ~ div {
  background-color: lightblue;
}

<div class="rating">
  <div data-rating="1"></div>
  <div data-rating="2"></div>
  <div data-rating="3"></div>
  <div data-rating="4"></div>
  <div data-rating="5"></div>
</div>

Convert date to another timezone in JavaScript

People familiar with the java 8 java.time package, or joda-time will probably love the new kid on the block: the js-joda library.

Install

npm install js-joda js-joda-timezone --save

Example

<script src="node_modules/js-joda/dist/js-joda.js"></script>
<script src="node_modules/js-joda-timezone/dist/js-joda-timezone.js"></script>
<script>
var dateStr = '2012/04/10 10:10:30 +0000';
JSJoda.use(JSJodaTimezone);
var j = JSJoda;
// https://js-joda.github.io/js-joda/esdoc/class/src/format/DateTimeFormatter.js~DateTimeFormatter.html#static-method-of-pattern
var zonedDateTime = j.ZonedDateTime.parse(dateStr, j.DateTimeFormatter.ofPattern('yyyy/MM/dd HH:mm:ss xx'));
var adjustedZonedDateTime = zonedDateTime.withZoneSameInstant(j.ZoneId.of('America/New_York'));
console.log(zonedDateTime.toString(), '=>', adjustedZonedDateTime.toString());
// 2012-04-10T10:10:30Z => 2012-04-10T06:10:30-04:00[America/New_York]
</script>

In true java nature, it's pretty verbose lol. But, being a ported java library, especially considering they ported 1800'ish test cases, it also probably works superbly accurately.

Chrono manipulation is hard. That's why many other libraries are buggy in edge cases. Moment.js seems to get timezones right, but the other js libs I've seen, including timezone-js, don't seem trustworthy.

Stored procedure - return identity as output parameter or scalar

Another option would be to use the return value of the stored procedure (I don't suggest this though, as that's usually best reserved for error values).

I've included it as both an output parameter and a result set when inserting a single row, in cases where the stored procedure was being consumed both by other SQL procedures and by a front-end which couldn't work with OUTPUT parameters (iBATIS in .NET, I believe):

CREATE PROCEDURE My_Insert
    @col1            VARCHAR(20),
    @new_identity    INT    OUTPUT
AS
BEGIN
    SET NOCOUNT ON

    INSERT INTO My_Table (col1)
    VALUES (@col1)

    SELECT @new_identity = SCOPE_IDENTITY()

    SELECT @new_identity AS id

    RETURN
END

The output parameter is easier to work with in T-SQL when calling from other stored procedures IMO, but some programming languages have poor or no support for output parameters and work better with result sets.

How to get value at a specific index of array In JavaScript?

shift can be used in places where you want to get the first element (index=0) of an array and chain with other array methods.

example:

const comps = [{}, {}, {}]
const specComp = comps
                  .map(fn1)
                  .filter(fn2)
                  .shift()

Remember shift mutates the array, which is very different from accessing via an indexer.

CSS show div background image on top of other contained elements

If you are using the background image for the rounded corners, then I would rather increase the padding of the main div to give enough room for the rounded corners of the background image to be visible.

Try increasing the padding of the main div style:

#mainWrapperDivWithBGImage 
{   
    background: url("myImageWithRoundedCorners.jpg") no-repeat scroll 0 0 transparent;   
    height: 248px;   
    margin: 0;   
    overflow: hidden;   
    padding: 10px 10px;   
    width: 996px; 
}

P.S: I assume the rounded corners have a radius of 10px.

How do I pass environment variables to Docker containers?

If you have the environment variables in an env.sh locally and want to set them up when the container starts, you could try:

COPY env.sh /env.sh
COPY <filename>.jar /<filename>.jar
ENTRYPOINT ["/bin/bash" , "-c", "source /env.sh && printenv && java -jar /<filename>.jar"]

This command starts the container with a bash shell (I want a bash shell since source is a bash builtin), sources the env.sh file (which sets the environment variables) and executes the jar file.

The env.sh looks like this,

#!/bin/bash
export FOO="BAR"
export DB_NAME="DATABASE_NAME"

I added the printenv command only to test that the actual source command works. You should probably remove it once you confirm the source command works fine, or the environment variables will appear in your docker logs.

How to set a fixed width column with CSS flexbox

You should use the flex or flex-basis property rather than width. Read more on MDN.

.flexbox .red {
  flex: 0 0 25em;
}

The flex CSS property is a shorthand property specifying the ability of a flex item to alter its dimensions to fill available space. It contains:

flex-grow: 0;     /* do not grow   - initial value: 0 */
flex-shrink: 0;   /* do not shrink - initial value: 1 */
flex-basis: 25em; /* width/height  - initial value: auto */

A simple demo shows how to set the first column to 50px fixed width.

.flexbox {
  display: flex;
}
.red {
  background: red;
  flex: 0 0 50px;
}
.green {
  background: green;
  flex: 1;
}
.blue {
  background: blue;
  flex: 1;
}

<div class="flexbox">
  <div class="red">1</div>
  <div class="green">2</div>
  <div class="blue">3</div>
</div>


See the updated codepen based on your code.

line breaks in a textarea

From PHP, using single quotes for the line break worked for me to preserve the line breaks when I pass that variable to an HTML textarea value. With single quotes PHP keeps the literal \n in the string, and JavaScript then interprets it as a newline when the value is assigned:

PHP

foreach ($videoUrls as $key => $value) {
  $textAreaValue .= $value->video_url . '\n';
}

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

HTML/JS

$( document ).ready(function() {
    var text = "<?= htmlspecialchars($textAreaValue); ?>";
    document.getElementById("video_urls_textarea").value = text;
});

How to retrieve unique count of a field using Kibana + Elastic Search

Create "topN" query on "clientip" and then histogram with count on "clientip" and set "topN" query as source. Then you will see count of different ips per time.

How can I start InternetExplorerDriver using Selenium WebDriver

Enable protected mode for all zones: you need to enable protected mode for every zone from the Internet Options -> Security tab.

http://codebit.in/question/1/selenium-webdriver-java-code-launch-internet-explorer-brow

Where to declare variable in react js

Assuming that onMove is an event handler, it is likely that its context is something other than the instance of MyContainer, i.e. this points to something different.

You can manually bind the context of the function during the construction of the instance via Function.bind:

class MyContainer extends Component {
  constructor(props) {
    super(props);

    this.onMove = this.onMove.bind(this);

    this.test = "this is a test";
  }

  onMove() {
    console.log(this.test);
  }
}

Also, test !== testVariable.

Why is 22 the default port number for SFTP?

It's the default SSH port and SFTP is usually carried over an SSH tunnel.

Using AJAX to pass variable to PHP and retrieve those using AJAX again

In your PHP file there is going to be a variable called $_REQUEST that contains an array with all the data sent from JavaScript to PHP using AJAX.

Try this: var_dump($_REQUEST); and check if you're receiving the values.

Jquery/Ajax Form Submission (enctype="multipart/form-data" ). Why does 'contentType:False' cause undefined index in PHP?

Please set your form action attribute as below; it will solve your problem.

<form name="addProductForm" id="addProductForm" action="javascript:;" enctype="multipart/form-data" method="post" accept-charset="utf-8">

jQuery code:

$(document).ready(function () {
    $("#addProductForm").submit(function (event) {

        //disable the default form submission
        event.preventDefault();
        //grab all form data, including file inputs
        var formData = new FormData(this);

        $.ajax({
            url: 'addProduct.php',
            type: 'POST',
            data: formData,
            async: false,
            cache: false,
            contentType: false,
            processData: false,
            success: function () {
                alert('Form Submitted!');
            },
            error: function(){
                alert("error in ajax form submission");
            }
        });

        return false;
    });
});

Using node.js as a simple web server

You don't need express. You don't need connect. Node.js does http NATIVELY. All you need to do is return a file dependent on the request:

var http = require('http')
var url = require('url')
var fs = require('fs')

http.createServer(function (request, response) {
    var requestUrl = url.parse(request.url)    
    response.writeHead(200)
    fs.createReadStream(requestUrl.pathname).pipe(response)  // do NOT use fs's sync methods ANYWHERE on production (e.g readFileSync) 
}).listen(9615)    

A more complete example that ensures requests can't access files outside the base directory, and does proper error handling:

var http = require('http')
var url = require('url')
var fs = require('fs')
var path = require('path')
var baseDirectory = __dirname   // or whatever base directory you want

var port = 9615

http.createServer(function (request, response) {
    try {
        var requestUrl = url.parse(request.url)

        // need to use path.normalize so people can't access directories underneath baseDirectory
        var fsPath = baseDirectory+path.normalize(requestUrl.pathname)

        var fileStream = fs.createReadStream(fsPath)
        fileStream.pipe(response)
        fileStream.on('open', function() {
             response.writeHead(200)
        })
        fileStream.on('error',function(e) {
             response.writeHead(404)     // assume the file doesn't exist
             response.end()
        })
   } catch(e) {
        response.writeHead(500)
        response.end()     // end the response so browsers don't hang
        console.log(e.stack)
   }
}).listen(port)

console.log("listening on port "+port)

Bitwise and in place of modulus operator

There is a simple bitwise way to find the modulo, but only for powers of two (2^i).

There is an ingenious way to solve Mersenne cases as per the link such as n % 3, n % 7... There are special cases for n % 5, n % 255, and composite cases such as n % 6.

For cases 2^i, ( 2, 4, 8, 16 ...)

n % 2^i = n & (2^i - 1)

More complicated ones are hard to explain. Read up only if you are very curious.
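
To see the identity in action, here is a tiny Python check (not part of the original answer; it only holds for non-negative integers):

# For non-negative n, modulo by a power of two equals a bitwise AND with (power - 1).
for n in (0, 1, 7, 8, 100, 255, 1023):
    for i in (1, 2, 3, 4, 8):
        assert n % (2 ** i) == n & ((2 ** i) - 1)
print("n % 2**i == n & (2**i - 1) holds for all tested values")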

"Expected an indented block" error?

I also experienced this. For example:

This code doesn't work and gives the "expected an indented block" error.

class Foo(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    pub_date = models.DateTimeField('date published')
    likes = models.IntegerField()

    def __unicode__(self):
    return self.title

However, when I press tab before typing the return self.title statement, the code works.

class Foo(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    pub_date = models.DateTimeField('date published')
    likes = models.IntegerField()

    def __unicode__(self):
        return self.title

Hope this will help others.

pthread function from a class

You'll have to give pthread_create a function that matches the signature it's looking for. What you're passing won't work.

You can implement whatever static function you like to do this, and it can reference an instance of c and execute what you want in the thread. pthread_create is designed to take not only a function pointer, but a pointer to "context". In this case you just pass it a pointer to an instance of c.

For instance:

static void* execute_print(void* ctx) {
    c* cptr = (c*)ctx;
    cptr->print();
    return NULL;
}


void func() {

    ...

    pthread_create(&t1, NULL, execute_print, &c[0]);

    ...
}

how to remove "," from a string in javascript

If you need a number greater than 999,999.00 you will have a problem.
These are only good for numbers less than 1 million, 1,000,000.
They only remove 1 or 2 commas.

Here is a script that can remove up to 12 commas:

function uncomma(x) {
  var string1 = x;
  for (var y = 0; y < 12; y++) {
    string1 = string1.replace(/\,/g, '');
  }
  return string1;
}

Modify that for loop if you need bigger numbers.

How to split a string literal across multiple lines in C / Objective-C?

GCC adds C++ multiline raw string literals as a C extension

C++11 has raw string literals as mentioned at: https://stackoverflow.com/a/44337236/895245

However, GCC also adds them as a C extension; you just have to use -std=gnu99 instead of -std=c99. E.g.:

main.c

#include <assert.h>
#include <string.h>

int main(void) {
    assert(strcmp(R"(
a
b
)", "\na\nb\n") == 0);
}

Compile and run:

gcc -o main -pedantic -std=gnu99 -Wall -Wextra main.c
./main

This can be used for example to insert multiline inline assembly into C code: How to write multiline inline assembly code in GCC C++?

Now you just have to sit back and wait for it to be standardized in C20XY.

C++ was asked at: C++ multiline string literal

Tested on Ubuntu 16.04, GCC 6.4.0, binutils 2.26.1.

reCAPTCHA ERROR: Invalid domain for site key

There is another point that must be noted before regenerating keys; it resolves 90% of these issues.

for example your xampp directory is C:\xampp

and htdocs folder is C:\xampp\htdocs

we want to open a page called example-cap.html and the page shows the error "invalid domain for site key"

USE YOUR LOCALHOST ADDRESS in the browser address bar, like:

localhost/example-cap.html

this will resolve your issue

DO NOT USE the address c:\xampp\htdocs\example-cap.html; this will generate the error.

How to compare two JSON have the same properties without order?

lodash will work, tested even for angular 5, http://jsfiddle.net/L5qrfx3x/

var remoteJSON = {"allowExternalMembers": "false", "whoCanJoin": 
   "CAN_REQUEST_TO_JOIN"};
var localJSON = {"whoCanJoin": "CAN_REQUEST_TO_JOIN", 
  "allowExternalMembers": "false"};

if (_.isEqual(remoteJSON, localJSON)) {
    //TODO
}

It works. For installation in Angular, follow this.

Programmatically set image to UIImageView with Xcode 6.1/Swift

How about this:

myImageView.image=UIImage(named: "image_1")

where image_1 is within the assets folder as image_1.png.

This worked for me since I'm using a switch case to display an image slide.

Room - Schema export directory is not provided to the annotation processor so we cannot export the schema

In the build.gradle file for your app module, add this to the defaultConfig section (under the android section). This will write out the schema to a schemas subfolder of your project folder.

javaCompileOptions {
    annotationProcessorOptions {
        arguments += ["room.schemaLocation": "$projectDir/schemas".toString()]
    }
}

Like this:

// ...

android {
    
    // ... (compileSdkVersion, buildToolsVersion, etc)

    defaultConfig {

        // ... (applicationId, minSdkVersion, etc)
        
        javaCompileOptions {
            annotationProcessorOptions {
                arguments += ["room.schemaLocation": "$projectDir/schemas".toString()]
            }
        }
    }
   
    // ... (buildTypes, compileOptions, etc)

}

// ...

How can I use Ruby to colorize the text output to a terminal?

I found the answers above to be useful; however, they didn't fit the bill when I wanted to colorize something like log output without using any third-party libraries. The following solved the issue for me:

# Use constants so the color codes are visible inside the method
# (plain local variables are not in scope inside def, including in the default argument).
RED = 31
GREEN = 32
BLUE = 34

def color(color = BLUE)
  printf "\033[#{color}m"
  yield
  printf "\033[0m"
end

color { puts "this is blue" }
color(RED) { logger.info "and this is red" }

I hope it helps!

How does one create an InputStream from a String?

Instead of Charset.forName, using com.google.common.base.Charsets from Google's Guava (http://code.google.com/p/guava-libraries/wiki/StringsExplained#Charsets) is slightly nicer:

InputStream is = new ByteArrayInputStream( myString.getBytes(Charsets.UTF_8) );

Which Charset you use depends entirely on what you're going to do with the InputStream, of course.

How to get base url with jquery or javascript?

This will get base url

var baseurl = window.location.origin+window.location.pathname;

IN vs ANY operator in PostgreSQL

(Neither IN nor ANY is an "operator". A "construct" or "syntax element".)

Logically, quoting the manual:

IN is equivalent to = ANY.

But there are two syntax variants of IN and two variants of ANY. Details:

IN taking a set is equivalent to = ANY taking a set, as demonstrated here:

But the second variant of each is not equivalent to the other. The second variant of the ANY construct takes an array (must be an actual array type), while the second variant of IN takes a comma-separated list of values. This leads to different restrictions in passing values and can also lead to different query plans in special cases:

ANY is more versatile

The ANY construct is far more versatile, as it can be combined with various operators, not just =. Example:

SELECT 'foo' LIKE ANY('{FOO,bar,%oo%}');

For a big number of values, providing a set scales better for each:

Related:

Inversion / opposite / exclusion

"Find rows where id is in the given array":

SELECT * FROM tbl WHERE id = ANY (ARRAY[1, 2]);

Inversion: "Find rows where id is not in the array":

SELECT * FROM tbl WHERE id <> ALL (ARRAY[1, 2]);
SELECT * FROM tbl WHERE id <> ALL ('{1, 2}');  -- equivalent array literal
SELECT * FROM tbl WHERE NOT (id = ANY ('{1, 2}'));

All three are equivalent. The first uses an array constructor, the other two use array literals. The data type can be derived from the context unambiguously; otherwise an explicit cast may be required, like '{1,2}'::int[].

Rows with id IS NULL do not pass either of these expressions. To include NULL values additionally:

SELECT * FROM tbl WHERE (id = ANY ('{1, 2}')) IS NOT TRUE;

How to sort a list of objects based on an attribute of the objects?

# To sort the list in place...
ut.sort(key=lambda x: x.count, reverse=True)

# To return a new list, use the sorted() built-in function...
newlist = sorted(ut, key=lambda x: x.count, reverse=True)

More on sorting by keys.
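
If the lambda feels noisy, operator.attrgetter does the same job; a small self-contained sketch (the Item class here is only for illustration):

from operator import attrgetter

class Item:
    def __init__(self, name, count):
        self.name = name
        self.count = count

ut = [Item('a', 3), Item('b', 10), Item('c', 1)]

# Equivalent to key=lambda x: x.count, in descending order
ut.sort(key=attrgetter('count'), reverse=True)
print([i.name for i in ut])  # ['b', 'a', 'c']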

Setting up an MS-Access DB for multi-user access

I think Access is a good choice for your case, but you have to split the database; see: http://accessblog.net/2005/07/how-to-split-database-into-be-and-fe.html

•How can we make sure that the write-user can make changes to the table data while other users use the data? Do the read-users put locks on tables? Does the write-user have to put locks on the table? Does Access do this for us or do we have to explicitly code this?

There are no read locks unless you put them explicitly. Just use "No Locks".

•Are there any common problems with "MS Access transactions" that we should be aware of?

There should not be problems with 1-2 write users.

•Can we work on forms, queries etc. while they are being used? How can we "program" without being in the way of the users?

If you split the database, then there is no problem working on the front-end design.

•Which settings in MS Access have an influence on how things are handled?

What do you mean?

•Our background is mostly in Oracle, where is Access different in handling multiple users? Is there such thing as "isolation levels" in Access?

There are no isolation levels in Access. BTW, you can later move the data to Oracle and keep the Access frontend if you end up with a lot of users and a big database.

nginx error "conflicting server name" ignored

I assume that you're running Linux and using gEdit to edit your files. In /etc/nginx/sites-enabled, it may have left a temp file, e.g. default~ (watch the ~).

Depending on your editor, the file could be named .save or something like it. Just run $ ls -lah to see which files are unintended to be there and remove them (Thanks @Tisch for this).

Delete this file, and it will solve your problem.

Python script to convert from UTF-8 to ASCII

import codecs

 ...

fichier = codecs.open(filePath, "r", encoding="utf-8")

 ...

fichierTemp = codecs.open("tempASCII", "w", encoding="ascii", errors="ignore")
fichierTemp.write(contentOfFile)

 ...
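
Filling in the elided parts, a complete minimal sketch of the same approach could look like this (the file names are placeholders, not from the original snippet):

import codecs

src_path = "input_utf8.txt"     # hypothetical input file
dst_path = "output_ascii.txt"   # hypothetical output file

# Read the UTF-8 file, then write it back out as ASCII,
# silently dropping characters that cannot be represented.
with codecs.open(src_path, "r", encoding="utf-8") as src:
    content = src.read()

with codecs.open(dst_path, "w", encoding="ascii", errors="ignore") as dst:
    dst.write(content)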

Error:Execution failed for task ':ProjectName:mergeDebugResources'. > Crunching Cruncher *some file* failed, see logs

Sometimes it can be caused by a wrong name for an XML or resource file.

At least for me, that problem was solved by renaming the file.

How do I format a number in Java?

From this thread, there are different ways to do this:

double r = 5.1234;
System.out.println(r); // r is 5.1234

int decimalPlaces = 2;
BigDecimal bd = new BigDecimal(r);

// setScale is immutable
bd = bd.setScale(decimalPlaces, BigDecimal.ROUND_HALF_UP);
r = bd.doubleValue();

System.out.println(r); // r is 5.12

f = (float) (Math.round(n*100.0f)/100.0f);

DecimalFormat df2 = new DecimalFormat( "#,###,###,##0.00" );
double dd = 100.2397;
double dd2dec = new Double(df2.format(dd)).doubleValue();

// The value of dd2dec will be 100.24

The DecimalFormat() seems to be the most dynamic way to do it, and it is also very easy to understand when reading others' code.

PHP how to get value from array if key is in a variable

$value = ( array_key_exists($key, $array) && !empty($array[$key]) ) 
         ? $array[$key] 
         : 'non-existant or empty value key';

How to simulate a button click using code?

Just to clarify what moonlightcheese stated: To trigger a button click event through code in Android provide the following:

buttonName.performClick();

Java way to check if a string is palindrome

For the least lines of code and the simplest case

if(s.equals(new StringBuilder(s).reverse().toString())) // is a palindrome.

Issue in installing php7.2-mcrypt

@praneeth-nidarshan has covered most of the steps, except some:

  • Check if you have pear installed (or install):

$ sudo apt-get install php-pear

  • Install php7.2-dev, if it isn't already installed, in order to avoid the error:

sh: phpize: not found

ERROR: `phpize’ failed

$ sudo apt-get install php7.2-dev

  • Install mcrypt using pecl:

$ sudo pecl install mcrypt-1.0.1

  • Add the line extension=mcrypt.so to your php.ini configuration file; if you don't know where it is, search with:

$ sudo php -i | grep 'Configuration File'

How do you see the entire command history in interactive Python?

If you want to write the history to a file:

import readline
readline.write_history_file('python_history.txt')

The help function gives:

Help on built-in function write_history_file in module readline:

write_history_file(...)
    write_history_file([filename]) -> None
    Save a readline history file.
    The default filename is ~/.history.
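
If you just want to print the history to the console instead of writing it to a file, the same module also exposes get_current_history_length and get_history_item; a short sketch:

import readline

# Print every line currently held in the interactive history buffer.
for i in range(1, readline.get_current_history_length() + 1):
    print(readline.get_history_item(i))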

Dictionary returning a default value if the key does not exist

I know this is an old post and I do favor extension methods, but here's a simple class I use from time to time to handle dictionaries when I need default values.

I wish this were just part of the base Dictionary class.

public class DictionaryWithDefault<TKey, TValue> : Dictionary<TKey, TValue>
{
  TValue _default;
  public TValue DefaultValue {
    get { return _default; }
    set { _default = value; }
  }
  public DictionaryWithDefault() : base() { }
  public DictionaryWithDefault(TValue defaultValue) : base() {
    _default = defaultValue;
  }
  public new TValue this[TKey key]
  {
    get { 
      TValue t;
      return base.TryGetValue(key, out t) ? t : _default;
    }
    set { base[key] = value; }
  }
}

Beware, however. By subclassing and using new (since override is not available on the native Dictionary type), if a DictionaryWithDefault object is upcast to a plain Dictionary, calling the indexer will use the base Dictionary implementation (throwing an exception if missing) rather than the subclass's implementation.

Maven error :Perhaps you are running on a JRE rather than a JDK?

Check if /usr/bin has 'javac'. If not, you have only the JRE installed and have to install the JDK development package, e.g. "java-1.8.0-openjdk-devel.x86_64".

R: Plotting a 3D surface from x, y, z

If your x and y coords are not on a grid then you need to interpolate your x,y,z surface onto one. You can do this with kriging using any of the geostatistics packages (geoR, gstat, others) or simpler techniques such as inverse distance weighting.

I'm guessing the 'interp' function you mention is from the akima package. Note that the output matrix is independent of the size of your input points. You could have 10000 points in your input and interpolate that onto a 10x10 grid if you wanted. By default akima::interp does it onto a 40x40 grid:

require(akima)
require(rgl)

x = runif(1000)
y = runif(1000)
z = rnorm(1000)
s = interp(x,y,z)
> dim(s$z)
[1] 40 40
surface3d(s$x,s$y,s$z)

That'll look spiky and rubbish because it's random data. Hopefully your data isn't!

How to import JSON File into a TypeScript file?

Here is a complete answer for Angular 6+ based on @ryanrain's answer:

From angular-cli doc, json can be considered as assets and accessed from standard import without use of ajax request.

Let's suppose you add your json files into "your-json-dir" directory:

  1. add "your-json-dir" into angular.json file (:

    "assets": [ "src/assets", "src/your-json-dir" ]

  2. create or edit typings.d.ts file (at your project root) and add the following content:

    declare module "*.json" { const value: any; export default value; }

    This will allow import of ".json" modules without typescript error.

  3. in your controller/service/anything else file, simply import the file by using this relative path:

    import * as myJson from 'your-json-dir/your-json-file.json';

Is Eclipse the best IDE for Java?

This is subjective... I find it to be a good tool.

It depends what kind of development you're doing - for EJB stuff, many folk would favour Netbeans. It also depends how much you want to spend - I assume you're talking about free IDEs?

Remove characters from C# string

Sounds like an ideal application for RegEx -- an engine designed for fast text manipulation. In this case:

Regex.Replace("He\"ll,o Wo'r.ld", "[@,\\.\";'\\\\]", string.Empty)

max(length(field)) in mysql

In case you need both max and min from the same table:

select * from (
    (select city, length(city) as maxlen from station
     order by maxlen desc limit 1)
    union
    (select city, length(city) as minlen from station
     order by minlen, city limit 1)
) a;

DISTINCT for only one column

You can get over that by using GROUP BY like this:

SELECT ID, Email, ProductName, ProductModel
FROM Products
GROUP BY Email

How do you get the currently selected <option> in a <select> via JavaScript?

var payeeCountry = document.getElementById( "payeeCountry" );
alert( payeeCountry.options[ payeeCountry.selectedIndex ].value );

HTML Input Box - Disable

<input type="text" required="true" value="" readonly="true">

This will make a text box in readonly mode, might be helpful in generating passwords and datepickers.

Download a file from NodeJS Server using Express

In Express 4.x, there is an attachment() method to Response:

res.attachment();
// Content-Disposition: attachment

res.attachment('path/to/logo.png');
// Content-Disposition: attachment; filename="logo.png"
// Content-Type: image/png

Cluster analysis in R: determine the optimal number of clusters

If your question is how can I determine how many clusters are appropriate for a kmeans analysis of my data?, then here are some options. The wikipedia article on determining numbers of clusters has a good review of some of these methods.

First, some reproducible data (the data in the Q are... unclear to me):

n = 100
g = 6 
set.seed(g)
d <- data.frame(x = unlist(lapply(1:g, function(i) rnorm(n/g, runif(1)*i^2))), 
                y = unlist(lapply(1:g, function(i) rnorm(n/g, runif(1)*i^2))))
plot(d)


One. Look for a bend or elbow in the sum of squared error (SSE) scree plot. See http://www.statmethods.net/advstats/cluster.html & http://www.mattpeeples.net/kmeans.html for more. The location of the elbow in the resulting plot suggests a suitable number of clusters for the kmeans:

mydata <- d
wss <- (nrow(mydata)-1)*sum(apply(mydata,2,var))
  for (i in 2:15) wss[i] <- sum(kmeans(mydata,
                                       centers=i)$withinss)
plot(1:15, wss, type="b", xlab="Number of Clusters",
     ylab="Within groups sum of squares")

We might conclude that 4 clusters would be indicated by this method.

Two. You can do partitioning around medoids to estimate the number of clusters using the pamk function in the fpc package.

library(fpc)
pamk.best <- pamk(d)
cat("number of clusters estimated by optimum average silhouette width:", pamk.best$nc, "\n")
plot(pam(d, pamk.best$nc))


# we could also do:
library(fpc)
asw <- numeric(20)
for (k in 2:20)
  asw[[k]] <- pam(d, k) $ silinfo $ avg.width
k.best <- which.max(asw)
cat("silhouette-optimal number of clusters:", k.best, "\n")
# still 4

Three. Calinsky criterion: Another approach to diagnosing how many clusters suit the data. In this case we try 1 to 10 groups.

require(vegan)
fit <- cascadeKM(scale(d, center = TRUE,  scale = TRUE), 1, 10, iter = 1000)
plot(fit, sortg = TRUE, grpmts.plot = TRUE)
calinski.best <- as.numeric(which.max(fit$results[2,]))
cat("Calinski criterion optimal number of clusters:", calinski.best, "\n")
# 5 clusters!


Four. Determine the optimal model and number of clusters according to the Bayesian Information Criterion for expectation-maximization, initialized by hierarchical clustering for parameterized Gaussian mixture models

# See http://www.jstatsoft.org/v18/i06/paper
# http://www.stat.washington.edu/research/reports/2006/tr504.pdf
#
library(mclust)
# Run the function to see how many clusters
# it finds to be optimal, set it to search for
# at least 1 model and up 20.
d_clust <- Mclust(as.matrix(d), G=1:20)
m.best <- dim(d_clust$z)[2]
cat("model-based optimal number of clusters:", m.best, "\n")
# 4 clusters
plot(d_clust)


Five. Affinity propagation (AP) clustering, see http://dx.doi.org/10.1126/science.1136800

library(apcluster)
d.apclus <- apcluster(negDistMat(r=2), d)
cat("affinity propogation optimal number of clusters:", length(d.apclus@clusters), "\n")
# 4
heatmap(d.apclus)
plot(d.apclus, d)


Six. Gap Statistic for Estimating the Number of Clusters. See also some code for a nice graphical output. Trying 2-10 clusters here:

library(cluster)
clusGap(d, kmeans, 10, B = 100, verbose = interactive())

Clustering k = 1,2,..., K.max (= 10): .. done
Bootstrapping, b = 1,2,..., B (= 100)  [one "." per sample]:
.................................................. 50 
.................................................. 100 
Clustering Gap statistic ["clusGap"].
B=100 simulated reference sets, k = 1..10
 --> Number of clusters (method 'firstSEmax', SE.factor=1): 4
          logW   E.logW        gap     SE.sim
 [1,] 5.991701 5.970454 -0.0212471 0.04388506
 [2,] 5.152666 5.367256  0.2145907 0.04057451
 [3,] 4.557779 5.069601  0.5118225 0.03215540
 [4,] 3.928959 4.880453  0.9514943 0.04630399
 [5,] 3.789319 4.766903  0.9775842 0.04826191
 [6,] 3.747539 4.670100  0.9225607 0.03898850
 [7,] 3.582373 4.590136  1.0077628 0.04892236
 [8,] 3.528791 4.509247  0.9804556 0.04701930
 [9,] 3.442481 4.433200  0.9907197 0.04935647
[10,] 3.445291 4.369232  0.9239414 0.05055486

Here's the output from Edwin Chen's implementation of the gap statistic.

Seven. You may also find it useful to explore your data with clustergrams to visualize cluster assignment, see http://www.r-statistics.com/2010/06/clustergram-visualization-and-diagnostics-for-cluster-analysis-r-code/ for more details.

Eight. The NbClust package provides 30 indices to determine the number of clusters in a dataset.

library(NbClust)
nb <- NbClust(d, diss=NULL, distance = "euclidean",
        method = "kmeans", min.nc=2, max.nc=15, 
        index = "alllong", alphaBeale = 0.1)
hist(nb$Best.nc[1,], breaks = max(na.omit(nb$Best.nc[1,])))
# Looks like 3 is the most frequently determined number of clusters
# and curiously, four clusters is not in the output at all!


If your question is how can I produce a dendrogram to visualize the results of my cluster analysis, then you should start with these: http://www.statmethods.net/advstats/cluster.html http://www.r-tutor.com/gpu-computing/clustering/hierarchical-cluster-analysis http://gastonsanchez.wordpress.com/2012/10/03/7-ways-to-plot-dendrograms-in-r/ And see here for more exotic methods: http://cran.r-project.org/web/views/Cluster.html

Here are a few examples:

d_dist <- dist(as.matrix(d))   # find distance matrix 
plot(hclust(d_dist))           # apply hirarchical clustering and plot


# a Bayesian clustering method, good for high-dimension data, more details:
# http://vahid.probstat.ca/paper/2012-bclust.pdf
install.packages("bclust")
library(bclust)
x <- as.matrix(d)
d.bclus <- bclust(x, transformed.par = c(0, -50, log(16), 0, 0, 0))
viplot(imp(d.bclus)$var); plot(d.bclus); ditplot(d.bclus)
dptplot(d.bclus, scale = 20, horizbar.plot = TRUE,varimp = imp(d.bclus)$var, horizbar.distance = 0, dendrogram.lwd = 2)
# I just include the dendrogram here


Also for high-dimensional data there is the pvclust library, which calculates p-values for hierarchical clustering via multiscale bootstrap resampling. Here's the example from the documentation (it won't work on such low-dimensional data as in my example):

library(pvclust)
library(MASS)
data(Boston)
boston.pv <- pvclust(Boston)
plot(boston.pv)


Does any of that help?

get data from mysql database to use in javascript

You can't do it with only Javascript. You'll need some server-side code (PHP, in your case) that serves as a proxy between the DB and the client-side code.
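
For illustration only, here is a rough sketch of that proxy idea in Python (the original question assumes PHP; the database file, table, and port below are hypothetical). The server-side endpoint queries the database and returns JSON, which the client-side JavaScript then fetches and parses:

import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

class DataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        conn = sqlite3.connect("example.db")              # stand-in for your MySQL database
        rows = conn.execute("SELECT id, name FROM items").fetchall()
        conn.close()
        body = json.dumps([{"id": r[0], "name": r[1]} for r in rows]).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)                            # the browser JS fetches and parses this

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DataHandler).serve_forever()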

Short circuit Array.forEach like calling break

If you want to keep your forEach syntax, this is a way to keep it efficient (although not as good as a regular for loop): check immediately for a variable that knows whether you want to break out of the loop.

This example uses an anonymous function to create a function scope around the forEach, which is needed to store the done flag.

(function(){
    var element = document.getElementById('printed-result');
    var done = false;
    [1,2,3,4].forEach(function(item){
        if(done){ return; }
        var text = document.createTextNode(item);
        element.appendChild(text);
        if (item === 2){
          done = true;
          return;
        }
    });
})();

<div id="printed-result"></div>

My two cents.

How can you detect the version of a browser?

I want to share this code I wrote for the issue I had to resolve. It was tested in most of the major browsers and works like a charm, for me!

It may seem that this code is very similar to the other answers, but I modified it so that I can use it instead of the $.browser object in jQuery, which went missing for me recently. Of course it is a combination of the codes above, with small improvements on my part:

(function($, ua){

var M = ua.match(/(opera|chrome|safari|firefox|msie|trident(?=\/))\/?\s*(\d+)/i) || [],
    tem, 
    res;

if(/trident/i.test(M[1])){
    tem = /\brv[ :]+(\d+)/g.exec(ua) || [];
    res = 'IE ' + (tem[1] || '');
}
else if(M[1] === 'Chrome'){
    tem = ua.match(/\b(OPR|Edge)\/(\d+)/);
    if(tem != null) 
        res = tem.slice(1).join(' ').replace('OPR', 'Opera');
    else
        res = [M[1], M[2]];
}
else {
    M = M[2]? [M[1], M[2]] : [navigator.appName, navigator.appVersion, '-?'];
    if((tem = ua.match(/version\/(\d+)/i)) != null) M = M.splice(1, 1, tem[1]);
    res = M;
}

res = typeof res === 'string'? res.split(' ') : res;

$.browser = {
    name: res[0],
    version: res[1],
    msie: /msie|ie/i.test(res[0]),
    firefox: /firefox/i.test(res[0]),
    opera: /opera/i.test(res[0]),
    chrome: /chrome/i.test(res[0]),
    edge: /edge/i.test(res[0])
}

})(typeof jQuery != 'undefined'? jQuery : window.$, navigator.userAgent);

 console.log($.browser.name, $.browser.version, $.browser.msie); 
// if IE 11 output is: IE 11 true

How to deserialize JS date using Jackson?

I found a work around but with this I'll need to annotate each date's setter throughout the project. Is there a way in which I can specify the format while creating the ObjectMapper?

Here's what I did:

public class CustomJsonDateDeserializer extends JsonDeserializer<Date>
{
    @Override
    public Date deserialize(JsonParser jsonParser,
            DeserializationContext deserializationContext) throws IOException, JsonProcessingException {

        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
        String date = jsonParser.getText();
        try {
            return format.parse(date);
        } catch (ParseException e) {
            throw new RuntimeException(e);
        }

    }

}

And annotated each Date field's setter method with this:

@JsonDeserialize(using = CustomJsonDateDeserializer.class)

How to get value of a div using javascript

To put it short, 'value' is not a valid attribute of a div, so it is absolutely correct that it returns undefined.

What you could do is something along the lines of using one of the HTML5 'data-*' attributes:

<div id="demo" align="center"  data-value="1">

And the script would be:

var val = document.getElementById('demo').getAttribute('data-value');

This should work in most modern browsers. Just remember to set your doctype to <!DOCTYPE html> for the page to validate.

Using SSH keys inside docker container

I ran into the same problem today. With a slightly modified version of the previous posts, I found this approach more useful to me:

docker run -it -v ~/.ssh/id_rsa:/root/.my-key:ro image /bin/bash

(Note the read-only flag, so the container cannot mess with my SSH key in any case.)

Inside container I can now run:

ssh-agent bash -c "ssh-add ~/.my-key; git clone <gitrepourl> <target>"

This way I don't get the Bad owner or permissions on /root/.ssh/.. error which was noted by @kross.

ASP.NET set hiddenfield a value in Javascript

I suggest you use the ClientID of the HiddenField: first register its client ID in a JavaScript variable from the code-behind, then use it in the client-side script, as:

.cs file code:

ClientScript.RegisterStartupScript(this.GetType(), "clientids", "var hdntxtbxTaksit=" + hdntxtbxTaksit.ClientID, true);

and then use following code in JS:

document.getElementById(hdntxtbxTaksit).value= ""; 

Twitter Bootstrap tabs not working: when I click on them nothing happens

I tried the code from Bootstrap's documentation as follows:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>tab demo</title>
  <link rel="stylesheet" href="bootstrap.css">
</head>
<body>
<div class="span9">
    <div class="tabbable"> <!-- Only required for left/right tabs -->
      <ul class="nav nav-tabs">
        <li class="active"><a href="#tab1" data-toggle="tab">Section 1</a></li>
        <li><a href="#tab2" data-toggle="tab">Section 2</a></li>
      </ul>
      <div class="tab-content">
        <div class="tab-pane active" id="tab1">
          <p>I'm in Section 1.</p>
        </div>
        <div class="tab-pane" id="tab2">
          <p>Howdy, I'm in Section 2.</p>
        </div>
      </div>
    </div>
</div> 
<script src="jquery.min.js"></script>
<script src="bootstrap.js"></script>
</body>
</html>

and it works. But when I switch the positions of these two <script src...> tags, it does not work. So please remember to load your jQuery script before bootstrap.js.

How do I access my webcam in Python?

John Montgomery's answer is great, but at least on Windows it is missing the line

vc.release()

before

cv2.destroyWindow("preview")

Without it, the camera resource is locked and cannot be captured again until the Python console is killed.
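
Putting it together, a minimal capture loop that releases the camera properly might look like this (a sketch, not the original answer's full code):

import cv2

cv2.namedWindow("preview")
vc = cv2.VideoCapture(0)          # open the default camera

while vc.isOpened():
    ok, frame = vc.read()         # grab a frame
    if not ok:
        break
    cv2.imshow("preview", frame)
    if cv2.waitKey(20) == 27:     # exit on ESC
        break

vc.release()                      # free the camera so it can be opened again later
cv2.destroyWindow("preview")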

Adding image to JFrame

Here is a simple example of adding an image to a JFrame:

frame.add(new JLabel(new ImageIcon("Path/To/Your/Image.png")));

Monad in plain English? (For the OOP programmer with no FP background)

A monad is a data type that encapsulates a value, and to which, essentially, two operations can be applied:

  • return x creates a value of the monad type that encapsulates x
  • m >>= f (read it as "the bind operator") applies the function f to the value in the monad m

That's what a monad is. There are a few more technicalities, but basically those two operations define a monad. The real question is, "What a monad does?", and that depends on the monad — lists are monads, Maybes are monads, IO operations are monads. All that it means when we say those things are monads is that they have the monad interface of return and >>=.
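
For OOP readers, here is a rough Python sketch of a Maybe-style monad with just those two operations (unit/return and bind); it is illustrative only, not any library's actual implementation:

class Maybe:
    def __init__(self, value, is_nothing=False):
        self.value = value
        self.is_nothing = is_nothing

    @staticmethod
    def unit(x):                      # "return x": wrap a plain value
        return Maybe(x)

    def bind(self, f):                # "m >>= f": apply f to the wrapped value
        if self.is_nothing:
            return self               # propagate failure without calling f
        return f(self.value)

NOTHING = Maybe(None, is_nothing=True)

def safe_div(x, y):
    return NOTHING if y == 0 else Maybe.unit(x / y)

result = Maybe.unit(10).bind(lambda x: safe_div(x, 2)).bind(lambda x: safe_div(x, 0))
print(result.is_nothing)  # True: the division by zero propagated through the chain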

Remove border from buttons

The usual trick is to make the image itself part of a link instead of a button. Then, you bind the "click" event with a custom handler.

Frameworks like jQuery UI or Bootstrap do this out of the box. Using one of them may ease the design of the whole application, by the way.

deleted object would be re-saved by cascade (remove deleted object from associations)

Somehow all the above solutions did not work in Hibernate 5.2.10.Final.

But setting the map to null as below worked for me:

playlist.setPlaylistadMaps(null);

How to get javax.comm API?

Oracle Java Communications API Reference - http://www.oracle.com/technetwork/java/index-jsp-141752.html

Official 3.0 Download (Solarix, Linux) - http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-misc-419423.html

Unofficial 2.0 Download (All): http://www.java2s.com/Code/Jar/c/Downloadcomm20jar.htm

Unofficial 2.0 Download (Windows installer) - http://kishor15389.blogspot.hk/2011/05/how-to-install-java-communications.html

In order to ensure there is no compilation error, place the file on your classpath when compiling (-cp command-line option, or check your IDE documentation).

How to get diff between all files inside 2 folders that are on the web?

Your urls are not in the same repository, so you can't do it with the svn diff command.

svn: 'http://svn.boost.org/svn/boost/sandbox/boost/extension' isn't in the same repository as 'http://cloudobserver.googlecode.com/svn'

Another way you could do it is to export each repository using svn export, and then use the diff command to compare the two exported directories.

// Export repositories
svn export http://svn.boost.org/svn/boost/sandbox/boost/extension/ repos1
svn export http://cloudobserver.googlecode.com/svn/branches/v0.4/Boost.Extension.Tutorial/libs/boost/extension/ repos2

// Compare exported directories
diff repos1 repos2 > file.diff

Java for loop multiple variables

Instead of this : for(int a = 0, b = 1; a<cards.length-1; b=a+1; a++;){

It should be

for(int a = 0, b = 1; a < cards.length()-1; b = a+1, a++){

Note the changes: cards.length() is a method call, the two update expressions b=a+1 and a++ are separated by a comma instead of a semicolon, and there is no semicolon after a++. The same care applies to the lines below:

if(rank==cards.substring(a,b)){ // note the closing parenthesis

System.out.println(c); // capital S in System

Select row with most recent date per user

Based on @TMS's answer (I like it because there's no need for subqueries), I think omitting the 'OR' part will be sufficient and much simpler to understand and read.

SELECT t1.*
FROM lms_attendance AS t1
LEFT JOIN lms_attendance AS t2
  ON t1.user = t2.user 
        AND t1.time < t2.time
WHERE t2.user IS NULL

If you are not interested in rows with null times, you can filter them out in the WHERE clause:

SELECT t1.*
FROM lms_attendance AS t1
LEFT JOIN lms_attendance AS t2
  ON t1.user = t2.user 
        AND t1.time < t2.time
WHERE t2.user IS NULL and t1.time IS NOT NULL

Code for best fit straight line of a scatter plot in python

You can use numpy's polyfit. I use the following (you can safely remove the bit about coefficient of determination and error bounds, I just think it looks nice):

#!/usr/bin/python3

import numpy as np
import matplotlib.pyplot as plt
import csv

with open("example.csv", "r") as f:
    data = [row for row in csv.reader(f)]
    xd = [float(row[0]) for row in data]
    yd = [float(row[1]) for row in data]

# sort the data
reorder = sorted(range(len(xd)), key = lambda ii: xd[ii])
xd = [xd[ii] for ii in reorder]
yd = [yd[ii] for ii in reorder]

# make the scatter plot
plt.scatter(xd, yd, s=30, alpha=0.15, marker='o')

# determine best fit line
par = np.polyfit(xd, yd, 1, full=True)

slope=par[0][0]
intercept=par[0][1]
xl = [min(xd), max(xd)]
yl = [slope*xx + intercept  for xx in xl]

# coefficient of determination, plot text
variance = np.var(yd)
residuals = np.var([(slope*xx + intercept - yy)  for xx,yy in zip(xd,yd)])
Rsqr = np.round(1-residuals/variance, decimals=2)
plt.text(.9*max(xd)+.1*min(xd),.9*max(yd)+.1*min(yd),'$R^2 = %0.2f$'% Rsqr, fontsize=30)

plt.xlabel("X Description")
plt.ylabel("Y Description")

# error bounds
yerr = [abs(slope*xx + intercept - yy)  for xx,yy in zip(xd,yd)]
par = np.polyfit(xd, yerr, 2, full=True)

yerrUpper = [(xx*slope+intercept)+(par[0][0]*xx**2 + par[0][1]*xx + par[0][2]) for xx,yy in zip(xd,yd)]
yerrLower = [(xx*slope+intercept)-(par[0][0]*xx**2 + par[0][1]*xx + par[0][2]) for xx,yy in zip(xd,yd)]

plt.plot(xl, yl, '-r')
plt.plot(xd, yerrLower, '--r')
plt.plot(xd, yerrUpper, '--r')
plt.show()

How do I use Notepad++ (or other) with msysgit?

I used starikovs' solution. I started with a bash window and gave the commands

cd ~
touch .bashrc

Then I found the .bashrc file in windows explorer, opened it with notepad++ and added

PATH=$PATH:"C:\Program Files (x86)\Notepad++"

so that bash knows where to find Notepad++. (Having Notepad++ in the bash PATH is a useful thing on its own!) Then I pasted his line

git config --global core.editor "notepad++.exe -multiInst"

into a bash window. I started a new bash window for a git repository to test things with the command

git rebase -i HEAD~10

and the file opened in Notepad++ as hoped.

Set focus on <input> element

I'm going to weigh in on this (Angular 7 Solution)

input [appFocus]="focus"....
import {AfterViewInit, Directive, ElementRef, Input,} from '@angular/core';

@Directive({
  selector: 'input[appFocus]',
})
export class FocusDirective implements AfterViewInit {

  @Input('appFocus')
  private focused: boolean = false;

  constructor(public element: ElementRef<HTMLElement>) {
  }

  ngAfterViewInit(): void {
    // ExpressionChangedAfterItHasBeenCheckedError: Expression has changed after it was checked.
    if (this.focused) {
      setTimeout(() => this.element.nativeElement.focus(), 0);
    }
  }
}

Fastest way to duplicate an array in JavaScript - slice vs. 'for' loop

a.map(e => e) is another alternative for this job. As of today .map() is very fast (almost as fast as .slice(0)) in Firefox, but not in Chrome.

On the other hand, since arrays are objects and objects are reference types, if an array is multi-dimensional none of the slice or concat methods will be a cure... so one proper way of cloning an array is to invent an Array.prototype.clone() as follows.

Array.prototype.clone = function(){
  return this.map(e => Array.isArray(e) ? e.clone() : e);
};

var arr = [ 1, 2, 3, 4, [ 1, 2, [ 1, 2, 3 ], 4 , 5], 6 ],
    brr = arr.clone();
brr[4][2][1] = "two";
console.log(JSON.stringify(arr));
console.log(JSON.stringify(brr));

Constraint Layout Vertical Align Center

 <TextView
        android:id="@+id/tvName"
        style="@style/textViewBoldLarge"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:layout_marginTop="10dp"
        android:text="Welcome"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toBottomOf="parent"/>

Get all child views inside LinearLayout at once

((ViewGroup) findViewById(android.R.id.content));// you can use this in an Activity to get your layout root view, then pass it to findAllEdittexts() method below.

Here I am collecting only EditTexts; if you want all Views you can replace EditText with View.

SparseArray<EditText> array = new SparseArray<EditText>();

private void findAllEdittexts(ViewGroup viewGroup) {

    int count = viewGroup.getChildCount();
    for (int i = 0; i < count; i++) {
        View view = viewGroup.getChildAt(i);
        if (view instanceof ViewGroup)
            findAllEdittexts((ViewGroup) view);
        else if (view instanceof EditText) {
            EditText edittext = (EditText) view;
            array.put(edittext.getId(), edittext);
        }
    }
}

How do search engines deal with AngularJS applications?

Use PushState and Precomposition

The current (2015) way to do this is using the JavaScript pushState method.

PushState changes the URL in the top browser bar without reloading the page. Say you have a page containing tabs. The tabs hide and show content, and the content is inserted dynamically, either using AJAX or by simply setting display:none and display:block to hide and show the correct tab content.

When the tabs are clicked, use pushState to update the url in the address bar. When the page is rendered, use the value in the address bar to determine which tab to show. Angular routing will do this for you automatically.

Precomposition

There are two ways to hit a PushState Single Page App (SPA)

  1. Via PushState, where the user clicks a PushState link and the content is AJAXed in.
  2. By hitting the URL directly.

The initial hit on the site will involve hitting the URL directly. Subsequent hits will simply AJAX in content as the PushState updates the URL.

Crawlers harvest links from a page then add them to a queue for later processing. This means that for a crawler, every hit on the server is a direct hit, they don't navigate via Pushstate.

Precomposition bundles the initial payload into the first response from the server, possibly as a JSON object. This allows the Search Engine to render the page without executing the AJAX call.

There is some evidence to suggest that Google might not execute AJAX requests. More on this here:

https://web.archive.org/web/20160318211223/http://www.analog-ni.co/precomposing-a-spa-may-become-the-holy-grail-to-seo

Search Engines can read and execute JavaScript

Google has been able to parse JavaScript for some time now, it's why they originally developed Chrome, to act as a full featured headless browser for the Google spider. If a link has a valid href attribute, the new URL can be indexed. There's nothing more to do.

If clicking a link in addition triggers a pushState call, the site can be navigated by the user via PushState.

Search Engine Support for PushState URLs

PushState is currently supported by Google and Bing.

Google

Here's Matt Cutts responding to Paul Irish's question about PushState for SEO:

http://youtu.be/yiAF9VdvRPw

Here is Google announcing full JavaScript support for the spider:

http://googlewebmastercentral.blogspot.de/2014/05/understanding-web-pages-better.html

The upshot is that Google supports PushState and will index PushState URLs.

See also Google webmaster tools' fetch as Googlebot. You will see your JavaScript (including Angular) is executed.

Bing

Here is Bing's announcement of support for pretty PushState URLs dated March 2013:

http://blogs.bing.com/webmaster/2013/03/21/search-engine-optimization-best-practices-for-ajax-urls/

Don't use HashBangs #!

Hashbang urls were an ugly stopgap requiring the developer to provide a pre-rendered version of the site at a special location. They still work, but you don't need to use them.

Hashbang URLs look like this:

domain.com/#!path/to/resource

This would be paired with a metatag like this:

<meta name="fragment" content="!">

Google will not index them in this form, but will instead pull a static version of the site from the _escaped_fragments_ URL and index that.

Pushstate URLs look like any ordinary URL:

domain.com/path/to/resource

The difference is that Angular handles them for you by intercepting the change to document.location dealing with it in JavaScript.

If you want to use PushState URLs (and you probably do) take out all the old hash style URLs and metatags and simply enable HTML5 mode in your config block.

Testing your site

Google Webmaster tools now contains a tool which will allow you to fetch a URL as google, and render JavaScript as Google renders it.

https://www.google.com/webmasters/tools/googlebot-fetch

Generating PushState URLs in Angular

To generate real URLs in Angular, rather than # prefixed ones, set HTML5 mode on your $locationProvider object.

$locationProvider.html5Mode(true);

Server Side

Since you are using real URLs, you will need to ensure the same template (plus some precomposed content) gets shipped by your server for all valid URLs. How you do this will vary depending on your server architecture.

Sitemap

Your app may use unusual forms of navigation, for example hover or scroll. To ensure Google is able to drive your app, I would probably suggest creating a sitemap, a simple list of all the urls your app responds to. You can place this at the default location (/sitemap or /sitemap.xml), or tell Google about it using webmaster tools.

It's a good idea to have a sitemap anyway.

Browser support

Pushstate works in IE10. In older browsers, Angular will automatically fall back to hash style URLs

A demo page

The following content is rendered using a pushstate URL with precomposition:

http://html5.gingerhost.com/london

As can be verified, at this link, the content is indexed and is appearing in Google.

Serving 404 and 301 Header status codes

Because the search engine will always hit your server for every request, you can serve header status codes from your server and expect Google to see them.

Convenient way to parse incoming multipart/form-data parameters in a Servlet

Solutions:

Solution A:

  1. Download http://www.servlets.com/cos/index.html
  2. Invoke getParameters() on com.oreilly.servlet.MultipartRequest

Solution B:

  1. Download http://jakarta.Apache.org/commons/fileupload/
  2. Invoke readHeaders() in org.apache.commons.fileupload.MultipartStream

Solution C:

  1. Download http://users.boone.net/wbrameld/multipartformdata/
  2. Invoke getParameter on com.bigfoot.bugar.servlet.http.MultipartFormData

Solution D:

Use Struts. Struts 1.1 handles this automatically.

Reference: http://www.jguru.com/faq/view.jsp?EID=1045507

c# - approach for saving user settings in a WPF application?

I typically do this sort of thing by defining a custom [Serializable] settings class and simply serializing it to disk. In your case you could just as easily store it as a string blob in your SQLite database.

Closing JFrame with button click

It appears to me that you have two issues here. One is that JFrame does not have a close method, which has been addressed in the other answers.

The other is that you're having trouble referencing your JFrame. Within actionPerformed, super refers to ActionListener. To refer to the JFrame instance there, use MyExtendedJFrame.super instead (you should also be able to use MyExtendedJFrame.this, as I see no reason why you'd want to override the behaviour of dispose or setVisible).

Exec : display stdout "live"

exec will also return a ChildProcess object that is an EventEmitter.

var exec = require('child_process').exec;
var coffeeProcess = exec('coffee -cw my_file.coffee');

coffeeProcess.stdout.on('data', function(data) {
    console.log(data); 
});

OR pipe the child process's stdout to the main stdout.

coffeeProcess.stdout.pipe(process.stdout);

OR inherit stdio using spawn

spawn('coffee', ['-cw', 'my_file.coffee'], { stdio: 'inherit' });

How can I test an AngularJS service from the console?

The AngularJS dependency injection framework is responsible for injecting the dependencies of your app module into your controllers. This is possible through its injector.

You need to first identify the ng-app and get the associated injector. The below query works to find your ng-app in the DOM and retrieve the injector.

angular.element('*[ng-app]').injector()

In Chrome, however, you can select the target ng-app element in the Elements panel, then use the $0 hack and issue angular.element($0).injector().

Once you have the injector, get any dependency injected service as below

injector = angular.element($0).injector();
injector.get('$mdToast');


Shortcut for creating single item list in C#

I've got this little function:

public static class CoreUtil
{    
    public static IEnumerable<T> ToEnumerable<T>(params T[] items)
    {
        return items;
    }
}

Since it doesn't prescribe a concrete return type this is so generic that I use it all over the place. Your code would look like

CoreUtil.ToEnumerable("title").ToList();

But of course it also allows

CoreUtil.ToEnumerable("title1", "title2", "title3").ToArray();

I often use it when I have to append/prepend one item to the output of a LINQ statement. For instance to add a blank item to a selection list:

CoreUtil.ToEnumerable("").Concat(context.TrialTypes.Select(t => t.Name))

Saves a few ToList() and Add statements.

(Late answer, but I stumbled upon this oldie and thought this could be helpful)

How to plot a function curve in R

I did some searching on the web, and these are some ways that I found:

The easiest way is using curve without a predefined function:

curve(x^2, from=1, to=50, xlab="x", ylab="y")


You can also use curve when you have a predefined function:

eq = function(x){x*x}
curve(eq, from=1, to=50, xlab="x", ylab="y")


If you want to use ggplot,

library("ggplot2")
eq = function(x){x*x}
ggplot(data.frame(x=c(1, 50)), aes(x=x)) + 
  stat_function(fun=eq)


How do you embed binary data in XML?

XML is so versatile...

<DATA>
  <BINARY>
    <BIT index="0">0</BIT>
    <BIT index="1">0</BIT>
    <BIT index="2">1</BIT>
    ...
    <BIT index="n">1</BIT>
  </BINARY>
</DATA>

XML is like violence - If it doesn't solve your problem, you're not using enough of it.

EDIT:

BTW: Base64 + CDATA is probably the best solution

(EDIT2:
Whoever upmods me, please also upmod the real answer. We don't want any poor soul to come here and actually implement my method because it was the highest ranked on SO, right?)

How to use the ProGuard in Android Studio?

You can configure your build.gradle file for proguard implementation. It can be at module level or the project level.

 buildTypes {

    debug {
        minifyEnabled false
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'

    }

}

The configuration shown is for the debug build type, but you can write your own build types as shown below inside buildTypes:

    myproductionbuild{
        minifyEnabled true
        proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
    }

It is better to have minifyEnabled false for your debug build, and minifyEnabled true for your production and other builds.

Copy your proguard-rules.txt file to the root of your module or project folder, like

$YOUR_PROJECT_DIR\YoutProject\yourmodule\proguard-rules.txt

You can change the name of the file as you want. After configuration, use one of the three options available to generate your build as per the buildType:

  1. Go to gradle task in right panel and search for assembleRelease/assemble(#your_defined_buildtype) under module tasks

  2. Go to Build Variant in Left Panel and select the build from drop down

  3. Go to project root directory in File Explorer and open cmd/terminal and run

Linux ./gradlew assembleRelease or assemble(#your_defined_buildtype)

Windows gradlew assembleRelease or assemble(#your_defined_buildtype)

You can find apk in your module/build directory.

More about the configuration and proguard files location is available at the link

http://tools.android.com/tech-docs/new-build-system/user-guide#TOC-Running-ProGuard

Message 'src refspec master does not match any' when pushing commits in Git

In my case, I forgot to include the .gitignore file. Here are all the steps required:

  1. Create an empty Git repository on remote,
  2. On local create the .gitignore file for your project. GitHub gives you a list of examples here
  3. Launch a terminal, and in your project do the following commands:

    git remote add origin YOUR/ORIGIN.git
    
    git add .
    
    git commit -m "initial commit or whatever message for first commit"
    
    git push -u origin master
    

Calling JavaScript Function From CodeBehind

Try this in the code-behind and it will work 100%.

Write these lines of code in your code-behind file:

string script = "window.onload = function() { YourJavaScriptFunctionName(); };";
ClientScript.RegisterStartupScript(this.GetType(), "YourJavaScriptFunctionName", script, true);

And this is the web form page

<script type="text/javascript">
    function YourJavaScriptFunctionName() {
        alert("Test!")
    }
</script>

Reload .profile in bash shell script (in unix)?

The bash script runs in a separate subshell. In order to make this work you will need to source this other script as well.

Differences between time complexity and space complexity?

First of all, the space complexity of this loop is O(1) (the input is customarily not included when calculating how much storage is required by an algorithm).

So the question that I have is: is it possible for an algorithm to have a different time complexity from its space complexity?

Yes, it is. In general, the time and the space complexity of an algorithm are not related to each other.

Sometimes one can be increased at the expense of the other. This is called a space-time tradeoff.
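As a small, hypothetical Python illustration: summing a list takes O(n) time but only O(1) extra space, while building the list of prefix sums takes O(n) time and O(n) extra space.

def total(values):
    # O(n) time, O(1) extra space: a single accumulator, regardless of input size
    s = 0
    for v in values:
        s += v
    return s

def prefix_sums(values):
    # O(n) time, O(n) extra space: the result list grows with the input
    result, s = [], 0
    for v in values:
        s += v
        result.append(s)
    return result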

How to 'insert if not exists' in MySQL?

Update or insert without known primary key

If you already have a unique or primary key, the other answers with either INSERT INTO ... ON DUPLICATE KEY UPDATE ... or REPLACE INTO ... should work fine (note that replace into deletes if exists and then inserts - thus does not partially update existing values).

But if you have the values for some_column_id and some_type whose combination is known to be unique, and you want to update some_value if such a row exists or insert it if it does not, and you want to do it in just one query (to avoid using a transaction), this might be a solution:

INSERT INTO my_table (id, some_column_id, some_type, some_value)
SELECT t.id, t.some_column_id, t.some_type, t.some_value
FROM (
    SELECT id, some_column_id, some_type, some_value
    FROM my_table
    WHERE some_column_id = ? AND some_type = ?
    UNION ALL
    SELECT s.id, s.some_column_id, s.some_type, s.some_value
    FROM (SELECT NULL AS id, ? AS some_column_id, ? AS some_type, ? AS some_value) AS s
) AS t
LIMIT 1
ON DUPLICATE KEY UPDATE
some_value = ?

Basically, the query executes this way (less complicated than it may look):

  • Select an existing row via the WHERE clause match.
  • Union that result with a potential new row (table s), where the column values are explicitly given (s.id is NULL, so it will generate a new auto-increment identifier).
  • If an existing row is found, then the potential new row from table s is discarded (due to LIMIT 1 on table t), and it will always trigger an ON DUPLICATE KEY which will UPDATE the some_value column.
  • If an existing row is not found, then the potential new row is inserted (as given by table s).

Note: Every table in a relational database should have at least a primary auto-increment id column. If you don't have this, add it, even when you don't need it at first sight. It is definitely needed for this "trick".

How can I concatenate a string and a number in Python?

Python is strongly typed; it will not implicitly convert a number to a string.

You have to do one of these:

"asd%d" % 9
"asd" + str(9)

What is the difference between <jsp:include page = ... > and <%@ include file = ... >?

Java Revisited

  1. A resource included by the include directive is loaded at JSP translation time, while a resource included by the include action is loaded at request time.
  2. With the include directive, any change to the included resource will not be visible until the JSP file is compiled again; with the include action, any change to the included resource will be visible on the next request.
  3. The include directive is a static import, while the include action is a dynamic import.
  4. The include directive uses the file attribute to specify the resource to be included, while the include action uses the page attribute for the same purpose.

How to disable RecyclerView scrolling?

I struggled with this issue for a few hours, so I would like to share my experience. The layoutManager solution is fine, but if you want to re-enable scrolling, the RecyclerView will jump back to the top.

The best solution so far (for me at least) is using @Zsolt Safrany's method but adding a getter and setter, so you don't have to remove or re-add the OnItemTouchListener.

As follows:

public class RecyclerViewDisabler implements RecyclerView.OnItemTouchListener {

    boolean isEnable = true;

    public RecyclerViewDisabler(boolean isEnable) {
        this.isEnable = isEnable;
    }

    public boolean isEnable() {
        return isEnable;
    }

    public void setEnable(boolean enable) {
        isEnable = enable;
    }

    @Override
    public boolean onInterceptTouchEvent(RecyclerView rv, MotionEvent e) {
        return !isEnable;
    }

    @Override
    public void onTouchEvent(RecyclerView rv, MotionEvent e) {}

    @Override
    public void onRequestDisallowInterceptTouchEvent(boolean disallowIntercept) {}
}

Usage

RecyclerViewDisabler disabler = new RecyclerViewDisabler(true);
feedsRecycler.addOnItemTouchListener(disabler);

// TO ENABLE/DISABLE JUST USE THIS
disabler.setEnable(enable);

"The page has expired due to inactivity" - Laravel 5.5

My case was solved with SESSION_DOMAIN: on my local machine it had to be set to xxx.localhost. It was conflicting with the production SESSION_DOMAIN, xxx.com, which was set directly in the session.php config file.

Callback functions in Java

This is very easy in Java 8 with lambdas.

public interface Callback {
    void callback();
}

public class Main {
    public static void main(String[] args) {
        methodThatExpectsACallback(() -> System.out.println("I am the callback."));
    }
    private static void methodThatExpectsACallback(Callback callback){
        System.out.println("I am the method.");
        callback.callback();
    }
}

PHP How to find the time elapsed since a date time?

Be warned: the majority of the mathematically calculated examples have a hard limit at 2038-01-18 (the 32-bit Unix timestamp limit) and will not work with dates beyond that.

As there was a lack of DateTime and DateInterval based examples, I wanted to provide a multi-purpose function that satisfies the OP's need and also others wanting compound elapsed periods, such as 1 month 2 days ago. It also covers a bunch of other use cases, such as a limit after which the date is displayed instead of the elapsed time, or filtering out portions of the elapsed time result.

Additionally the majority of the examples assume elapsed is from the current time, where the below function allows for it to be overridden with the desired end date.

/**
 * Multi-purpose function to calculate the time elapsed between $start and an optional $end
 * @param string|null $start the date string to start the calculation from
 * @param string|null $end the date string to end the calculation at
 * @param null|string $limit date string to stop calculations on and display the date instead if reached - ex: 1 month
 * @param bool|array $filter false to display all periods, true to display the first period matching the minimum, or an array of periods to display ['year', 'month']
 * @param string $suffix the suffix string to include in the calculated string
 * @param string $format the format of the resulting date if the limit is reached or no periods were found
 * @param string $separator the separator between periods to use when filter is not true
 * @param int $minimum the minimum value needed to include a period
 * @return string
 */
function elapsedTimeString($start, $end = null, $limit = null, $filter = true, $suffix = 'ago', $format = 'Y-m-d', $separator = ' ', $minimum = 1)
{
    $dates = (object) array(
        'start' => new DateTime($start ? : 'now'),
        'end' => new DateTime($end ? : 'now'),
        'intervals' => array('y' => 'year', 'm' => 'month', 'd' => 'day', 'h' => 'hour', 'i' => 'minute', 's' => 'second'),
        'periods' => array()
    );
    $elapsed = (object) array(
        'interval' => $dates->start->diff($dates->end),
        'unknown' => 'unknown'
    );
    if ($elapsed->interval->invert === 1) {
        return trim('0 seconds ' . $suffix);
    }
    if (false === empty($limit)) {
        $dates->limit = new DateTime($limit);
        if (date_create()->add($elapsed->interval) > $dates->limit) {
            return $dates->start->format($format) ? : $elapsed->unknown;
        }
    }
    if (true === is_array($filter)) {
        $dates->intervals = array_intersect($dates->intervals, $filter);
        $filter = false;
    }
    foreach ($dates->intervals as $period => $name) {
        $value = $elapsed->interval->$period;
        if ($value >= $minimum) {
            $dates->periods[] = vsprintf('%1$s %2$s%3$s', array($value, $name, ($value !== 1 ? 's' : '')));
            if (true === $filter) {
                break;
            }
        }
    }
    if (false === empty($dates->periods)) {
        return trim(vsprintf('%1$s %2$s', array(implode($separator, $dates->periods), $suffix)));
    }

    return $dates->start->format($format) ? : $elapsed->unknown;
}

One thing to note - the retrieved intervals for the supplied filter values do not carry over to the next period. The filter merely displays the resulting value of the supplied periods and does not recalculate the periods to display only the desired filter total.


Usage

For the OP's need of displaying the highest period (as of 2015-02-24).

echo elapsedTimeString('2010-04-26');
/** 4 years ago */

To display compound periods and supply a custom end date (note the lack of time supplied and fictional dates).

echo elapsedTimeString('1920-01-01', '2500-02-24', null, false);
/** 580 years 1 month 23 days ago */

To display the result of filtered periods (ordering of array doesn't matter).

echo elapsedTimeString('2010-05-26', '2012-02-24', null, ['month', 'year']);
/** 1 year 8 months ago */

To display the start date in the supplied format (default Y-m-d) if the limit is reached.

echo elapsedTimeString('2010-05-26', '2012-02-24', '1 year');
/** 2010-05-26 */

There are a bunch of other use cases. It can also easily be adapted to accept Unix timestamps and/or DateInterval objects for the start, end, or limit arguments.

How to convert list data into json in java

public static List<Product> getCartList() {

    JSONObject responseDetailsJson = new JSONObject();
    JSONArray jsonArray = new JSONArray();

    List<Product> cartList = new Vector<Product>(cartMap.keySet().size());
    for(Product p : cartMap.keySet()) {
        cartList.add(p);
        JSONObject formDetailsJson = new JSONObject();
        formDetailsJson.put("id", "1");
        formDetailsJson.put("name", "name1");
        jsonArray.add(formDetailsJson);
    }
    responseDetailsJson.put("forms", jsonArray);//Here you can see the data in json format

    return cartList;

}

You can get the data in the following form:

{
    "forms": [
        { "id": "1", "name": "name1" },
        { "id": "2", "name": "name2" } 
    ]
}

How can I set a DateTimePicker control to a specific date?

Also, we can assign the value to the control in the designer class (i.e. FormName.Designer.cs):

DateTimePicker1.Value = DateTime.Now;

This way you always get the current date.

How to navigate through a vector using iterators? (C++)

Vector's iterators are random access iterators which means they look and feel like plain pointers.

You can access the nth element by adding n to the iterator returned from the container's begin() method, or you can use operator [].

std::vector<int> vec(10);
std::vector<int>::iterator it = vec.begin();

int sixth = *(it + 5);
int third = *(2 + it);
int second = it[1];

Alternatively you can use the advance function which works with all kinds of iterators. (You'd have to consider whether you really want to perform "random access" with non-random-access iterators, since that might be an expensive thing to do.)

std::vector<int> vec(10);
std::vector<int>::iterator it = vec.begin();

std::advance(it, 5);
int sixth = *it;

How to Refresh a Component in Angular

Adding this code to the required component's constructor worked for me.

this.router.routeReuseStrategy.shouldReuseRoute = function () {
  return false;
};

this.mySubscription = this.router.events.subscribe((event) => {
  if (event instanceof NavigationEnd) {
    // Trick the Router into believing its last link wasn't previously loaded
    this.router.navigated = false;
  }
});

Make sure to unsubscribe from this mySubscription in ngOnDestroy().

ngOnDestroy() {
  if (this.mySubscription) {
    this.mySubscription.unsubscribe();
  }
}

Refer to this thread for more details - https://github.com/angular/angular/issues/13831

Text not wrapping inside a div element

That's because there are no spaces in that long string so it has to break out of its container. Add word-break:break-all; to your .title rules to force a break.

#calendar_container > #events_container > .event_block > .title {
    width:400px;
    font-size:12px;
    word-break:break-all;
}

jsFiddle example

Pure JavaScript equivalent of jQuery's $.ready() - how to call a function when the page/DOM is ready for it

The good folks at HubSpot have a resource where you can find pure JavaScript methodologies for achieving a lot of jQuery goodness - including ready:

http://youmightnotneedjquery.com/#ready

function ready(fn) {
  if (document.readyState != 'loading'){
    fn();
  } else if (document.addEventListener) {
    document.addEventListener('DOMContentLoaded', fn);
  } else {
    document.attachEvent('onreadystatechange', function() {
      if (document.readyState != 'loading')
        fn();
    });
  }
}

example inline usage:

ready(function() { alert('hello'); });

Only mkdir if it does not exist

Do a test

[[ -d dir ]] || mkdir dir

Or use -p option:

mkdir -p dir

Python readlines() usage and efficient practice for reading

Read line by line, not the whole file:

for line in open(file_name, 'rb'):
    # process line here

Even better, use with to automatically close the file:

with open(file_name, 'rb') as f:
    for line in f:
        # process line here

The above will read the file object using an iterator, one line at a time.
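As a small, hypothetical extension of that pattern, you can strip each trailing newline and count lines without ever loading the whole file (using the same file_name as above):

line_count = 0
with open(file_name, 'rb') as f:
    for line in f:
        line = line.rstrip(b'\n')   # bytes, since the file was opened in 'rb'
        line_count += 1
print(line_count)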

No newline after div?

Have you considered using span instead of div? It is the in-line version of div.

lexers vs parsers

To answer the question as asked (without repeating unduly what appears in other answers)

Lexers and parsers are not very different, as suggested by the accepted answer. Both are based on simple language formalisms: regular languages for lexers and, almost always, context-free (CF) languages for parsers. They both are associated with fairly simple computational models, the finite state automaton and the push-down stack automaton. Regular languages are a special case of context-free languages, so that lexers could be produced with the somewhat more complex CF technology. But it is not a good idea for at least two reasons.

A fundamental point in programming is that a system component should be built with the most appropriate technology, so that it is easy to produce, to understand and to maintain. The technology should not be overkill (using techniques much more complex and costly than needed), nor should it be at the limit of its power, thus requiring technical contortions to achieve the desired goal.

That is why "It seems fashionable to hate regular expressions". Though they can do a lot, they sometimes require very unreadable coding to achieve it, not to mention the fact that various extensions and restrictions in implementations somewhat reduce their theoretical simplicity. Lexers do not usually do that, and are usually a simple, efficient, and appropriate technology to parse tokens. Using CF parsers for tokens would be overkill, though it is possible.
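As a rough illustration of that point (a minimal Python sketch, not tied to any particular lexer generator), a handful of regular expressions is usually all a lexer needs:

import re

# Token specification: each token kind is a simple regular expression
TOKEN_SPEC = [
    ('NUMBER', r'\d+'),
    ('IDENT',  r'[A-Za-z_]\w*'),
    ('OP',     r'[+\-*/=]'),
    ('SKIP',   r'\s+'),
]
TOKEN_RE = re.compile('|'.join('(?P<%s>%s)' % pair for pair in TOKEN_SPEC))

def tokenize(text):
    # yield (kind, value) pairs; whitespace is skipped
    for m in TOKEN_RE.finditer(text):
        if m.lastgroup != 'SKIP':
            yield m.lastgroup, m.group()

print(list(tokenize('x1 = 42 + y')))
# [('IDENT', 'x1'), ('OP', '='), ('NUMBER', '42'), ('OP', '+'), ('IDENT', 'y')]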

Another reason not to use CF formalism for lexers is that it might then be tempting to use the full CF power. But that might raise structural problems regarding the reading of programs.

Fundamentally, most of the structure of program text, from which meaning is extracted, is a tree structure. It expresses how the parse sentence (program) is generated from syntax rules. Semantics is derived by compositional techniques (homomorphism for the mathematically oriented) from the way syntax rules are composed to build the parse tree. Hence the tree structure is essential. The fact that tokens are identified with a regular-set-based lexer does not change the situation, because CF composed with regular still gives CF (I am speaking very loosely about regular transducers, which transform a stream of characters into a stream of tokens).

However, CF composed with CF (via CF transducers ... sorry for the math) does not necessarily give CF, and might make things more general, but less tractable in practice. So CF is not the appropriate tool for lexers, even though it can be used.

One of the major differences between regular and CF is that regular languages (and transducers) compose very well with almost any formalism in various ways, while CF languages (and transducers) do not, not even with themselves (with a few exceptions).

(Note that regular transducers may have other uses, such as formalization of some syntax error handling techniques.)

BNF is just a specific syntax for presenting CF grammars.

EBNF is syntactic sugar for BNF, using the facilities of regular notation to give a terser version of BNF grammars. It can always be transformed into an equivalent pure BNF.

However, the regular notation is often used in EBNF only to emphasize those parts of the syntax that correspond to the structure of lexical elements and should be recognized with the lexer, while the rest is rather presented in straight BNF. But it is not an absolute rule.

To summarize, the simpler structure of tokens is better analyzed with the simpler technology of regular languages, while the tree-oriented structure of the language (of program syntax) is better handled by CF grammars.

I would suggest also looking at AHR's answer.

But this leaves a question open: Why trees?

Trees are a good basis for specifying syntax because

  • they give a simple structure to the text

  • they are very convenient for associating semantics with the text on the basis of that structure, with a mathematically well understood technology (compositionality via homomorphisms), as indicated above. It is a fundamental algebraic tool to define the semantics of mathematical formalisms.

Hence it is a good intermediate representation, as shown by the success of Abstract Syntax Trees (AST). Note that ASTs are often different from parse trees because the parsing technology used by many professionals (such as LL or LR) applies only to a subset of CF grammars, thus forcing grammatical distortions which are later corrected in the AST. This can be avoided with more general parsing technology (based on dynamic programming) that accepts any CF grammar.

Statements about the fact that programming languages are context-sensitive (CS) rather than CF are arbitrary and disputable.

The problem is that the separation of syntax and semantics is arbitrary. Checking declarations or type agreement may be seen as either part of syntax, or part of semantics. The same would be true of gender and number agreement in natural languages. But there are natural languages where plural agreement depends on the actual semantic meaning of words, so that it does not fit well with syntax.

Many definitions of programming languages in denotational semantics place declarations and type checking in the semantics. So stating as done by Ira Baxter that CF parsers are being hacked to get a context sensitivity required by syntax is at best an arbitrary view of the situation. It may be organized as a hack in some compilers, but it does not have to be.

Also, it is not just that CS parsers (in the sense used in other answers here) are hard to build and less efficient. They are also inadequate to express perspicuously the kind of context-sensitivity that might be needed. And they do not naturally produce a syntactic structure (such as parse trees) that is convenient to derive the semantics of the program, i.e. to generate the compiled code.

Restricting JTextField input to Integers

Here's one approach that uses a KeyListener, but uses the keyChar (instead of the keyCode):

http://edenti.deis.unibo.it/utils/Java-tips/Validating%20numerical%20input%20in%20a%20JTextField.txt

keyText.addKeyListener(new KeyAdapter() {
  public void keyTyped(KeyEvent e) {
    char c = e.getKeyChar();
    if (!((c >= '0') && (c <= '9') ||
          (c == KeyEvent.VK_BACK_SPACE) ||
          (c == KeyEvent.VK_DELETE))) {
      getToolkit().beep();
      e.consume();
    }
  }
});

Another approach (which personally I find almost as over-complicated as Swing's JTree model) is to use Formatted Text Fields:

http://docs.oracle.com/javase/tutorial/uiswing/components/formattedtextfield.html

Read from a gzip file in python

Try gzipping some data through the gzip library like this...

import gzip
content = "Lots of content here"
f = gzip.open('Onlyfinnaly.log.gz', 'wb')
f.write(content)
f.close()

... then run your code as posted ...

import gzip
f=gzip.open('Onlyfinnaly.log.gz','rb')
file_content=f.read()
print file_content

This method worked for me as for some reason the gzip library fails to read some files.
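On Python 3 the same read is usually written with a with block, and 'rt' mode if you want text instead of bytes; a minimal sketch:

import gzip

with gzip.open('Onlyfinnaly.log.gz', 'rt') as f:
    file_content = f.read()
print(file_content)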

How do I add a placeholder on a CharField in Django?

Great question. There are three solutions I know about:

Solution #1

Replace the default widget.

class SearchForm(forms.Form):  
    q = forms.CharField(
            label='Search',
            widget=forms.TextInput(attrs={'placeholder': 'Search'})
        )

Solution #2

Customize the default widget. If you're using the same widget that the field usually uses then you can simply customize that one instead of instantiating an entirely new one.

class SearchForm(forms.Form):  
    q = forms.CharField(label='Search')

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.fields['q'].widget.attrs.update({'placeholder': 'Search'})

Solution #3

Finally, if you're working with a model form then (in addition to the previous two solutions) you have the option to specify a custom widget for a field by setting the widgets attribute of the inner Meta class.

class CommentForm(forms.ModelForm):  
    class Meta:
        model = Comment
        widgets = {
            'body': forms.Textarea(attrs={'cols': 80, 'rows': 20})
        }

Easiest way to compare arrays in C#

SequenceEqual will only return true if two conditions are met:

  1. They contain the same elements.
  2. The elements are in the same order.

If you only want to check if they contain the same elements regardless of their order and your problem is of the type

Does values2 contain all the values contained in values1?

you can use the LINQ extension method Enumerable.Except and then check whether the result has any values. Here's an example:

int[] values1 = { 1, 2, 3, 4 };
int[] values2 = { 1, 2, 5 };
var result = values1.Except(values2);
if(result.Count()==0)
{
   //They are the same
}
else
{
    //They are different
}

And also by using this you get the different items as well automatically. Two birds with one stone.

Keep in mind, if you execute your code like this

var result = values2.Except(values1);

you will get different results.

In my case I have a local copy of an array and want to check if anything has been removed from the original array so I use this method.

Uncaught ReferenceError: function is not defined with onclick

I got this resolved in Angular with (click)="someFunctionName()" in the .html file for the specific component.

Cross-Origin Request Blocked

@Egidius, when creating an XMLHttpRequest, you should use

var xhr = new XMLHttpRequest({mozSystem: true});

What is mozSystem?

mozSystem Boolean: Setting this flag to true allows making cross-site connections without requiring the server to opt-in using CORS. Requires setting mozAnon: true, i.e. this can't be combined with sending cookies or other user credentials. This only works in privileged (reviewed) apps; it does not work on arbitrary webpages loaded in Firefox.

Changes to your Manifest

On your manifest, do not forget to include this line on your permissions:

"permissions": {
       "systemXHR" : {},
}

Get value of Span Text

You need to change your code as below:

<html>
<body>

<span id="span_Id">Click the button to display the content.</span>

<button onclick="displayDate()">Click Me</button>

<script>
function displayDate() {
   var span_Text = document.getElementById("span_Id").innerText;
   alert (span_Text);
}
</script>
</body>
</html>

how to use the Box-Cox power transformation in R

If I want to transform only the response variable y instead of specifying a linear model with x (e.g. I want to transform/normalize a list of data), I can take 1 for x, and the object then becomes a linear model:

library(MASS)
y = rf(500,30,30)
hist(y,breaks = 12)
result = boxcox(y~1, lambda = seq(-5,5,0.5))
mylambda = result$x[which.max(result$y)]
mylambda
y2 = (y^mylambda-1)/mylambda
hist(y2)

How to edit the size of the submit button on a form?

<input type="button" value="submit" style="height: 100px; width: 100px; left: 250px; top: 250px;">

Use this with your requirements.

Count number of rows matching a criteria

To get the number of observations, counting the number of matching rows in your dataset is more valid:

nrow(dat[dat$sCode == "CA",])

Is there an eval() function in Java?

I would advise you to use Exp4j. It is easy to understand, as you can see from the following example code:

Expression e = new ExpressionBuilder("3 * sin(y) - 2 / (x - 2)")
    .variables("x", "y")
    .build()
    .setVariable("x", 2.3)
    .setVariable("y", 3.14);
double result = e.evaluate();

How to stop C# console applications from closing automatically?

Use Console.ReadLine() to wait for the user to press Enter, or Console.ReadKey() to wait for any key.

How to run Tensorflow on CPU

You can also set the environment variable to

CUDA_VISIBLE_DEVICES=""

without having to modify the source code.
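If you prefer to set it from Python instead, a common approach is to set the variable before TensorFlow initializes the GPU (ideally before the import):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""   # hide all GPUs from this process

import tensorflow as tf   # TensorFlow will now only see the CPU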

Why does comparing strings using either '==' or 'is' sometimes produce a different result?

is is identity testing and == is equality testing. That means is checks whether two things are the same object, while == checks whether they are merely equivalent.

Say you've got a simple Person object. If it is named 'Jack' and is 23 years old, it's equivalent to another 23-year-old Jack, but it's not the same person.

class Person(object):
   def __init__(self, name, age):
       self.name = name
       self.age = age

   def __eq__(self, other):
       return self.name == other.name and self.age == other.age

jack1 = Person('Jack', 23)
jack2 = Person('Jack', 23)

jack1 == jack2 # True
jack1 is jack2 # False

They're the same age, but they're not the same instance of person. A string might be equivalent to another, but it's not the same object.
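A quick way to see the same thing with strings (building one of them at runtime, since CPython may intern identical literals):

a = "hello world"
b = "".join(["hello", " ", "world"])

print(a == b)   # True  - the contents are equal
print(a is b)   # False - they are two distinct string objects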

Multiline text in JLabel

This is horrifying. All these answers suggest adding <html> to the start of the label text, and there is not one word in the Java 11 (or earlier) documentation for JLabel to suggest that the text of a label is handled differently if it happens to start with <html>. Who says that works everywhere and always will? And you can get big, big surprises wrapping arbitrary text in <html> and handing it to an HTML layout engine.

I've upvoted the answer that suggests JTextArea. But I'll note that JTextArea isn't a drop-in replacement; by default it expands to fill rows, which is not how JLabel acts. I haven't come up with a solution to that yet.

Github: Can I see the number of downloads for a repo?

GitHub has deprecated the download support and now supports 'Releases' - https://github.com/blog/1547-release-your-software. To create a release, either use the GitHub UI or create an annotated tag (http://git-scm.com/book/ch2-6.html) and add release notes to it in GitHub. You can then upload binaries, or 'assets', to each release.

Once you have some releases, the GitHub API supports getting information about them, and their assets.

curl -i \
https://api.github.com/repos/:owner/:repo/releases \
-H "Accept: application/vnd.github.manifold-preview+json"

Look for the 'download_count' entry. There's more info at http://developer.github.com/v3/repos/releases/. This part of the API is still in the preview period ATM, so it may change.

Update Nov 2013:

GitHub's releases API is now out of the preview period so the 'Accept' header is no longer needed - http://developer.github.com/changes/2013-11-04-releases-api-is-official/

It won't do any harm to continue to add the 'Accept' header though.

Pass variables by reference in JavaScript

I like to work around the lack of pass-by-reference in JavaScript as this example shows.

The essence of this is that you don't try to create pass-by-reference. Instead you use the return value and make the function return multiple values. So there isn't any need to wrap your values in arrays or objects.

var x = "First";
var y = "Second";
var z = "Third";

log('Before call:',x,y,z);
with (myFunc(x, y, z)) {x = a; y = b; z = c;} // <-- Way to call it
log('After call :',x,y,z);


function myFunc(a, b, c) {
  a = "Changed first parameter";
  b = "Changed second parameter";
  c = "Changed third parameter";
  return {a:a, b:b, c:c}; // <-- Return multiple values
}

function log(txt,p1,p2,p3) {
  document.getElementById('msg').innerHTML += txt + '<br>' + p1 + '<br>' + p2 + '<br>' + p3 + '<br><br>'
}

<div id='msg'></div>

Converting a date string to a DateTime object using Joda Time library

A simple method:

public static DateTime transfStringToDateTime(String dateParam, Session session) throws NotesException {
    DateTime dateRetour;
    dateRetour = session.createDateTime(dateParam);                 

    return dateRetour;
}

How many threads is too many?

As many threads as there are CPU cores is what I've heard very often.

Convert a tensor to numpy array in Tensorflow?

You need to:

  1. encode the image tensor in some format (jpeg, png) into a binary tensor
  2. evaluate (run) the binary tensor in a session
  3. turn the binary into a stream
  4. feed it to PIL Image
  5. (optional) display the image with matplotlib

Code:

import tensorflow as tf
import matplotlib.pyplot as plt
import PIL

...

image_tensor = <your decoded image tensor>
jpeg_bin_tensor = tf.image.encode_jpeg(image_tensor)

with tf.Session() as sess:
    # display encoded back to image data
    jpeg_bin = sess.run(jpeg_bin_tensor)
    jpeg_str = StringIO.StringIO(jpeg_bin)
    jpeg_image = PIL.Image.open(jpeg_str)
    plt.imshow(jpeg_image)

This worked for me. You can try it in an IPython notebook. Just don't forget to add the following line:

%matplotlib inline
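If all you need is the NumPy array itself rather than an encoded image, step 2 alone is enough: in TensorFlow 1.x, running a tensor in a session already returns a NumPy array. A minimal sketch:

import tensorflow as tf

t = tf.constant([[1.0, 2.0], [3.0, 4.0]])

with tf.Session() as sess:
    arr = sess.run(t)   # arr is a numpy.ndarray
    print(type(arr), arr.shape)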

Store output of sed into a variable

Use command substitution like this:

line=$(sed -n '2p' myfile)
echo "$line"

Also note that there is no space around the = sign.

PHPExcel How to apply styles and set cell width and cell height to cell generated dynamically

You can use

$objPHPExcel->getActiveSheet()->getRowDimension('1')->setRowHeight(40);
$objPHPExcel->getActiveSheet()->getColumnDimension('A')->setWidth(100);

or let the row height be calculated automatically:

$objWorksheet->getRowDimension('1')->setRowHeight(-1);

How to check for file existence

# file? will only return true for files
File.file?(filename)

and

# Will also return true for directories - watch out!
File.exist?(filename)

How do I add items to an array in jQuery?

You are making an ajax request which is asynchronous; therefore your console log of the list length occurs before the ajax request has completed.

The only way of achieving what you want is changing the ajax call to be synchronous. You can do this by using $.ajax and passing in async: false. However, this is not recommended as it locks the UI up until the call has returned; if it fails to return, the user has to crash out of the browser.

MySQL Database won't start in XAMPP Manager-osx

This should work:
sudo /Applications/XAMPP/xamppfiles/bin/mysql.server start

Accessing the index in 'for' loops?

You can use the index method

ints = [8, 23, 45, 12, 78]
inds = [ints.index(i) for i in ints]

EDIT: It was highlighted in the comments that this method doesn't work if there are duplicates in ints; the method below should work for any values in ints:

ints = [8, 8, 8, 23, 45, 12, 78]
inds = [tup[0] for tup in enumerate(ints)]

Or alternatively

ints = [8, 8, 8, 23, 45, 12, 78]
inds = [tup for tup in enumerate(ints)]

if you want to get both the index and the value in ints as a list of tuples.

It uses enumerate as in the selected answer to this question, but with a list comprehension, making it faster with less code.
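If you only need the index while looping (rather than building a list of indices up front), the plain enumerate loop is the usual idiom:

ints = [8, 8, 8, 23, 45, 12, 78]
for ind, value in enumerate(ints):
    print(ind, value)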

change cursor to finger pointer

I think the "best answer" above, albeit programmatically accurate, does not actually answer the question posed. the question asks how to change the pointer in the mouseover event. I see posts about how one may have an error somewhere is not answering the question. In the accepted answer, the mouseover event is blank (onmouseover="") and the style option, instead, is included. Baffling why this was done.

There may be nothing wrong with the inquirer's link. Consider the following HTML:

<a id=test_link onclick="alert('kinda neat');">Click ME!</a>

When a user mouses over this link, the pointer will not change to a hand; instead, the pointer will behave like it's hovering over normal text. One might not want this, and so the mouse pointer needs to be told to change.

The answer being sought is this (which was posted by another user):

<a id=test_link onclick="alert('Nice!');"
       onmouseover="this.style.cursor='pointer';">Click ME!</a>

However, this is ... a nightmare if you have lots of these, or use this kind of thing all over the place and decide to make some kind of a change or run into a bug. Better to make a CSS class for it:

a.lendhand {
  cursor: pointer;
}

then:

<a class=lendhand onclick="alert('hand is lent!');">Click ME!</a>

There are many other ways which would be, arguably, better than this method. DIVs, BUTTONs, IMGs, etc. might prove more useful. I see no harm in using <a>...</a>, though.

jarett.

Python - Using regex to find multiple matches and print them out

Instead of using re.search, use re.findall; it will return all matches in a list. Or you could use re.finditer (which I like most); it returns an iterator object that you can use to iterate over all found matches.

import re

line = 'bla bla bla<form>Form 1</form> some text...<form>Form 2</form> more text?'
for match in re.finditer('<form>(.*?)</form>', line, re.S):
    print(match.group(1))
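The re.findall variant mentioned in the first sentence looks like this (reusing line from the snippet above); with a single capture group it returns just the captured text of each match:

matches = re.findall('<form>(.*?)</form>', line, re.S)
print(matches)   # ['Form 1', 'Form 2']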

Difference between natural join and inner join

An inner join and a natural join are almost the same, but there is a slight difference: in a natural join there is no need to specify a join condition, while in an inner join a condition is obligatory. If no condition is specified for the join, the resulting table is like a Cartesian product.

Accidentally committed .idea directory files into git

Add .idea directory to the list of ignored files

First, add it to .gitignore, so it is not accidentally committed by you (or someone else) again:

.idea

Remove it from repository

Second, remove the directory only from the repository, but do not delete it locally. To achieve that, do what is listed here:

Remove a file from a Git repository without deleting it from the local filesystem

Send the change to others

Third, commit the .gitignore file and the removal of .idea from the repository. After that push it to the remote(s).

Summary

The full process would look like this:

$ echo '.idea' >> .gitignore
$ git rm -r --cached .idea
$ git add .gitignore
$ git commit -m '(some message stating you added .idea to ignored entries)'
$ git push

(optionally you can replace the last line with git push some_remote, where some_remote is the name of the remote you want to push to)

Programmatically close aspx page from code behind

For closing an ASP.NET/Windows application:

  • Just drag a button onto the design page
  • Then write the code given below inside its click event

        this.Close();
    

ie:

 private void button2_Click(object sender, EventArgs e)  
 {
        this.Close();
 }

Difference between fprintf, printf and sprintf?

In C, a "stream" is an abstraction; from the program's perspective it is simply a producer (input stream) or consumer (output stream) of bytes. It can correspond to a file on disk, to a pipe, to your terminal, or to some other device such as a printer or tty. The FILE type contains information about the stream. Normally, you don't mess with a FILE object's contents directly, you just pass a pointer to it to the various I/O routines.

There are three standard streams: stdin is a pointer to the standard input stream, stdout is a pointer to the standard output stream, and stderr is a pointer to the standard error output stream. In an interactive session, the three usually refer to your console, although you can redirect them to point to other files or devices:

$ myprog < inputfile.dat > output.txt 2> errors.txt

In this example, stdin now points to inputfile.dat, stdout points to output.txt, and stderr points to errors.txt.

fprintf writes formatted text to the output stream you specify.

printf is equivalent to writing fprintf(stdout, ...) and writes formatted text to wherever the standard output stream is currently pointing.

sprintf writes formatted text to an array of char, as opposed to a stream.

What does CultureInfo.InvariantCulture mean?

JetBrains offer a reasonable explanation,

"Ad-hoc conversion of data structures to text is largely dependent on the current culture, and may lead to unintended results when the code is executed on a machine whose locale differs from that of the original developer. To prevent ambiguities, ReSharper warns you of any instances in code where such a problem may occur."

but if I am working on a site I know will be in English only, I just ignore the suggestion.

Facebook Architecture

Well, Facebook has undergone MANY many changes and it wasn't originally designed to be efficient. It was designed to do its job. I have absolutely no idea what the code looks like and you probably won't find much info about it (for obvious security and copyright reasons), but just take a look at the API. Look at how often it changes and how much of it doesn't work properly anymore, or at all.

I think the biggest ace up their sleeve is HipHop. http://developers.facebook.com/blog/post/358 You can use HipHop yourself: https://github.com/facebook/hiphop-php/wiki

But if you ask me, it's a very ambitious and probably time-wasting task. HipHop only supports so much; it can't simply convert everything to C++. So what does this tell us? Well, it tells us that Facebook is NOT fully taking advantage of the PHP language. It's not using the latest 5.3 and I'm willing to bet there's still a lot that is PHP 4 compatible. Otherwise, they couldn't use HipHop. HipHop IS A GOOD IDEA and needs to grow and expand, but in its current state it's not really useful for that many people who are building NEW PHP apps.

There's also PHP to JAVA via things like Resin/Quercus. Again, it doesn't support everything...

Another thing to note is that if you use any non-standard PHP module, you aren't going to be able to convert that code to C++ or Java either. However...Let's take a look at PHP modules. They ARE compiled in C++. So if you can build PHP modules that do things (like parse XML, etc.) then you are basically (minus some interaction) working at the same speed. Of course you can't just make a PHP module for every possible need and your entire app because you would have to recompile and it would be much more difficult to code, etc.

However...There are some handy PHP modules that can help with speed concerns. Though at the end of the day, we have this awesome thing known as "the cloud" and with it, we can scale our applications (PHP included) so it doesn't matter as much anymore. Hardware is becoming cheaper and cheaper. Speaking of which, Amazon just lowered its prices (again).

So as long as you code your PHP app around the idea that it will need to one day scale...Then I think you're fine and I'm not really sure I'd even look at Facebook and what they did because when they did it, it was a completely different world and now trying to hold up that infrastructure and maintain it...Well, you get things like HipHop.

Now how is HipHop going to help you? It won't. It can't. You're starting fresh, you can use PHP 5.3. I'd highly recommend looking into PHP 5.3 frameworks and all the new benefits that PHP 5.3 brings to the table along with the SPL libraries and also think about your database too. You're most likely serving up content from a database, so check out MongoDB and other types of databases that are schema-less and document-oriented. They are much much faster and better for the most "common" type of web site/app.

Look at NEW companies like Foursquare and Smugmug and some other companies that are utilizing NEW technology and HOW they are using it. For as successful as Facebook is, I honestly would not look at them for "how" to build an efficient web site/app. I'm not saying they don't have very (very) talented people that work there that are solving (their) problems creatively...I'm also not saying that Facebook isn't a great idea in general and that it's not successful and that you shouldn't get ideas from it....I'm just saying that if you could view their entire source code, you probably wouldn't benefit from it.

Check if a number has a decimal place/is a whole number

parseInt(num) === num

when passed a number, parseInt() just returns the number as int:

parseInt(3.3) === 3.3 // false because 3 !== 3.3
parseInt(3) === 3     // true

*ngIf and *ngFor on same element causing error

The table below only lists items that have a "beginner" value set. It requires both the *ngFor and the *ngIf to prevent unwanted rows in the HTML.

I originally had *ngIf and *ngFor on the same <tr> tag, but that doesn't work. Adding a <div> for the *ngFor loop and placing *ngIf on the <tr> tag works as expected.

<table class="table lessons-list card card-strong ">
  <tbody>
  <div *ngFor="let lesson of lessons" >
   <tr *ngIf="lesson.isBeginner">
    <!-- next line doesn't work -->
    <!-- <tr *ngFor="let lesson of lessons" *ngIf="lesson.isBeginner"> -->
    <td class="lesson-title">{{lesson.description}}</td>
    <td class="duration">
      <i class="fa fa-clock-o"></i>
      <span>{{lesson.duration}}</span>
    </td>
   </tr>
  </div>
  </tbody>

</table>

How do I remove my IntelliJ license in 2019.3?

On Linux/Ubuntu you can run the following commands:

cd ~/.config/JetBrains/PyCharm2020.1
rm eval/PyCharm201.evaluation.key
sed -i '/evlsprt/d' options/other.xml
cd ~/.java/.userPrefs/jetbrains
rm -rf pycharm*

How to install package from github repo in Yarn

This is described here: https://yarnpkg.com/en/docs/cli/add#toc-adding-dependencies

For example:

yarn add https://github.com/novnc/noVNC.git#0613d18

What are ABAP and SAP?

An attempt to provide a simplified explanation:

SAP

  • Firstly, it is a product.
  • The owner company derives its name from the product name "SAP".
  • It is a management system (i.e. referred to as an ERP), which means it is a tool used for "managing the system" (domain specific - finance etc.).

Now, SAP has created an environment around SAP. In order to operate in the SAP environment (i.e. for customisations etc.), a language abstraction was required. Here comes ABAP.

ABAP

  • It is a (high-level) language, which is used in the SAP environment for customisations or new feature implementations.
  • It is high-level because it is known only in the SAP environment.

Therefore, any customisation of the basic version of SAP given to some customer of SAP would require ABAP usage; otherwise, the SAP delivered as-is is good enough for usage (i.e. no ABAP required).

Now there is another term, HANA.

HANA

  • This is an in-memory RDBMS.
  • Another tool/product by SAP, you could say, and its prime focus is to facilitate "analytics".
  • The way this is designed gives high compression (column-wise storage), and hence it is mostly used for "READ" operations, which is why it is associated with "analysis".

SAP and HANA together abstract the underlying complexity of database-access queries and the UI (developed in Java), to make the user experience good for the management system (used majorly in analytics, so that the main focus stays on analytics). This very specific tool/product is referred to as a "technology", as it has an environment of its own (terminologies etc.). ABAP facilitates further development of the SAP ERP.

The underlying development is in C, C++ (and ABAP) for SAP.

Display QImage with QtGui

Thanks all, I found out how to do it, and it is the same as Dave's and Sergey's answers:

I am using Qt Creator:

In the main GUI window, use the drag-and-drop designer to create a label (e.g. "myLabel").

In the callback of the button (clicked), do the following, using the (*ui) pointer to the user interface window:

void MainWindow::on_pushButton_clicked()
{
     QImage imageObject;
     imageObject.load(imagePath);
     ui->myLabel->setPixmap(QPixmap::fromImage(imageObject));

     //OR use the other way by setting the Pixmap directly

     QPixmap pixmapObject(imagePath);
     ui->myLabel2->setPixmap(pixmapObject);
}

How do I delete all messages from a single queue using the CLI?

IMPORTANT NOTE: This will delete all users and config.

ALERT !!

ALERT !!

I don't suggest this answer unless you want to delete data from all of the queues, including users and configs. Just reset it!!!

rabbitmqctl stop_app
rabbitmqctl reset
rabbitmqctl start_app

How to convert an enum type variable to a string?

Here's the Old Skool method (used to be used extensively in gcc) using just the C pre-processor. Useful if you're generating discrete data structures but need to keep the order consistent between them. The entries in mylist.tbl can of course be extended to something much more complex.

test.cpp:

enum {
#undef XX
#define XX(name, ignore) name ,
#include "mylist.tbl"
  LAST_ENUM
};

char * enum_names [] = {
#undef XX
#define XX(name, ignore) #name ,
#include "mylist.tbl"
   "LAST_ENUM"
};

And then mylist.tbl:

/*    A = enum                  */
/*    B = some associated value */
/*     A        B   */
  XX( enum_1 , 100)
  XX( enum_2 , 100 )
  XX( enum_3 , 200 )
  XX( enum_4 , 900 )
  XX( enum_5 , 500 )

Unable to read data from the transport connection : An existing connection was forcibly closed by the remote host

If you have an https certificate on the domain, make sure you have the https binding to the domain name in IIS. In IIS, select your domain and click on Bindings; the Site Bindings window opens up. Add a binding for https.

How to make scipy.interpolate give an extrapolated result beyond the input range?

You can take a look at InterpolatedUnivariateSpline

Here an example using it:

import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# given values
xi = np.array([0.2, 0.5, 0.7, 0.9])
yi = np.array([0.3, -0.1, 0.2, 0.1])
# positions to inter/extrapolate
x = np.linspace(0, 1, 50)
# spline order: 1 linear, 2 quadratic, 3 cubic ... 
order = 1
# do inter/extrapolation
s = InterpolatedUnivariateSpline(xi, yi, k=order)
y = s(x)

# example showing the interpolation for linear, quadratic and cubic interpolation
plt.figure()
plt.plot(xi, yi)
for order in range(1, 4):
    s = InterpolatedUnivariateSpline(xi, yi, k=order)
    y = s(x)
    plt.plot(x, y)
plt.show()

How to create a temporary table in SSIS control flow task and then use it in data flow task?

Solution:

Set the property RetainSameConnection on the Connection Manager to True so that temporary table created in one Control Flow task can be retained in another task.

Here is a sample SSIS package written in SSIS 2008 R2 that illustrates using temporary tables.

Walkthrough:

Create a stored procedure that will create a temporary table named ##tmpStateProvince and populate it with a few records. The sample SSIS package will first call the stored procedure and then fetch the temporary table data to populate the records into another database table. The sample package will use the database named Sora. Use the below create stored procedure script.

USE Sora;
GO

CREATE PROCEDURE dbo.PopulateTempTable
AS
BEGIN
    
    SET NOCOUNT ON;

    IF OBJECT_ID('TempDB..##tmpStateProvince') IS NOT NULL
        DROP TABLE ##tmpStateProvince;

    CREATE TABLE ##tmpStateProvince
    (
            CountryCode     nvarchar(3)         NOT NULL
        ,   StateCode       nvarchar(3)         NOT NULL
        ,   Name            nvarchar(30)        NOT NULL
    );

    INSERT INTO ##tmpStateProvince 
        (CountryCode, StateCode, Name)
    VALUES
        ('CA', 'AB', 'Alberta'),
        ('US', 'CA', 'California'),
        ('DE', 'HH', 'Hamburg'),
        ('FR', '86', 'Vienne'),
        ('AU', 'SA', 'South Australia'),
        ('VI', 'VI', 'Virgin Islands');
END
GO

Create a table named dbo.StateProvince that will be used as the destination table to populate the records from the temporary table. Use the below create table script to create the destination table.

USE Sora;
GO

CREATE TABLE dbo.StateProvince
(
        StateProvinceID int IDENTITY(1,1)   NOT NULL
    ,   CountryCode     nvarchar(3)         NOT NULL
    ,   StateCode       nvarchar(3)         NOT NULL
    ,   Name            nvarchar(30)        NOT NULL
    CONSTRAINT [PK_StateProvinceID] PRIMARY KEY CLUSTERED
        ([StateProvinceID] ASC)
) ON [PRIMARY];
GO

Create an SSIS package using Business Intelligence Development Studio (BIDS). Right-click on the Connection Managers tab at the bottom of the package and click New OLE DB Connection... to create a new connection to access SQL Server 2008 R2 database.

Connection Managers - New OLE DB Connection

Click New... on Configure OLE DB Connection Manager.

Configure OLE DB Connection Manager - New

Perform the following actions on the Connection Manager dialog.

  • Select Native OLE DB\SQL Server Native Client 10.0 from Provider since the package will connect to SQL Server 2008 R2 database
  • Enter the Server name, like MACHINENAME\INSTANCE
  • Select Use Windows Authentication from Log on to the server section or whichever you prefer.
  • Select the database from Select or enter a database name, the sample uses the database name Sora.
  • Click Test Connection
  • Click OK on the Test connection succeeded message.
  • Click OK on Connection Manager

Connection Manager

The newly created data connection will appear on Configure OLE DB Connection Manager. Click OK.

Configure OLE DB Connection Manager - Created

OLE DB connection manager KIWI\SQLSERVER2008R2.Sora will appear under the Connection Manager tab at the bottom of the package. Right-click the connection manager and click Properties

Connection Manager Properties

Set the property RetainSameConnection on the connection KIWI\SQLSERVER2008R2.Sora to the value True.

RetainSameConnection Property on Connection Manager

Right-click anywhere inside the package and then click Variables to view the variables pane. Create the following variables.

  • A new variable named PopulateTempTable of data type String in the package scope SO_5631010 and set the variable with the value EXEC dbo.PopulateTempTable.

  • A new variable named FetchTempData of data type String in the package scope SO_5631010 and set the variable with the value SELECT CountryCode, StateCode, Name FROM ##tmpStateProvince

Variables

Drag and drop an Execute SQL Task on to the Control Flow tab. Double-click the Execute SQL Task to view the Execute SQL Task Editor.

On the General page of the Execute SQL Task Editor, perform the following actions.

  • Set the Name to Create and populate temp table
  • Set the Connection Type to OLE DB
  • Set the Connection to KIWI\SQLSERVER2008R2.Sora
  • Select Variable from SQLSourceType
  • Select User::PopulateTempTable from SourceVariable
  • Click OK

Execute SQL Task Editor

Drag and drop a Data Flow Task onto the Control Flow tab. Rename the Data Flow Task as Transfer temp data to database table. Connect the green arrow from the Execute SQL Task to the Data Flow Task.

Control Flow Tab

Double-click the Data Flow Task to switch to Data Flow tab. Drag and drop an OLE DB Source onto the Data Flow tab. Double-click OLE DB Source to view the OLE DB Source Editor.

On the Connection Manager page of the OLE DB Source Editor, perform the following actions.

  • Select KIWI\SQLSERVER2008R2.Sora from OLE DB Connection Manager
  • Select SQL command from variable from Data access mode
  • Select User::FetchTempData from Variable name
  • Click Columns page

OLE DB Source Editor - Connection Manager

Clicking Columns page on OLE DB Source Editor will display the following error because the table ##tmpStateProvince specified in the source command variable does not exist and SSIS is unable to read the column definition.

Error message

To fix the error, execute the statement EXEC dbo.PopulateTempTable using SQL Server Management Studio (SSMS) on the database Sora so that the stored procedure will create the temporary table. After executing the stored procedure, click the Columns page on the OLE DB Source Editor and you will see the column information. Click OK.

OLE DB Source Editor - Columns

Drag and drop OLE DB Destination onto the Data Flow tab. Connect the green arrow from OLE DB Source to OLE DB Destination. Double-click OLE DB Destination to open OLE DB Destination Editor.

On the Connection Manager page of the OLE DB Destination Editor, perform the following actions.

  • Select KIWI\SQLSERVER2008R2.Sora from OLE DB Connection Manager
  • Select Table or view - fast load from Data access mode
  • Select [dbo].[StateProvince] from Name of the table or the view
  • Click Mappings page

OLE DB Destination Editor - Connection Manager

Clicking the Mappings page on the OLE DB Destination Editor will automatically map the columns if the input and output column names are the same. Click OK. The column StateProvinceID does not have a matching input column and it is defined as an IDENTITY column in the database. Hence, no mapping is required.

OLE DB Destination Editor - Mappings

Data Flow tab should look something like this after configuring all the components.

Data Flow tab

Click the OLE DB Source on Data Flow tab and press F4 to view Properties. Set the property ValidateExternalMetadata to False so that SSIS would not try to check for the existence of the temporary table during validation phase of the package execution.

Set ValidateExternalMetadata

Execute the query select * from dbo.StateProvince in the SQL Server Management Studio (SSMS) to find the number of rows in the table. It should be empty before executing the package.

Rows in table before package execution

Execute the package. Control Flow shows successful execution.

Package Execution  - Control Flow tab

In the Data Flow tab, you will notice that the package successfully processed 6 rows. The stored procedure created earlier in this post inserted 6 rows into the temporary table.

Package Execution  - Data Flow tab

Execute the query select * from dbo.StateProvince in SQL Server Management Studio (SSMS) to find the 6 rows successfully inserted into the table. The data should match the rows created in the stored procedure.

Rows in table after package execution

The above example illustrated how to create and use temporary table within a package.

How to set level logging to DEBUG in Tomcat?

JULI logging levels for Tomcat

SEVERE - Serious failures

WARNING - Potential problems

INFO - Informational messages

CONFIG - Static configuration messages

FINE - Trace messages

FINER - Detailed trace messages

FINEST - Highly detailed trace messages

You can find more here: https://documentation.progress.com/output/ua/OpenEdge_latest/index.html#page/pasoe-admin/tomcat-logging.html

Reporting Services Remove Time from DateTime in Expression

This should be done in the dataset. You could do this

Select CAST(CAST(YourDateTime as date) AS Varchar(11)) as DateColumnName

In SSRS Layout, just do this =Fields!DateColumnName.Value

PostgreSQL Autoincrement

This way will work for sure, I hope it helps:

CREATE TABLE fruits(
   id SERIAL PRIMARY KEY,
   name VARCHAR NOT NULL
);

INSERT INTO fruits(id,name) VALUES(DEFAULT,'apple');

or

INSERT INTO fruits VALUES(DEFAULT,'apple');

You can check this the details in the next link: http://www.postgresqltutorial.com/postgresql-serial/

Java Delegates?

Short story: no.

Introduction

The newest version of the Microsoft Visual J++ development environment supports a language construct called delegates or bound method references. This construct, and the new keywords delegate and multicast introduced to support it, are not a part of the Java™ programming language, which is specified by the Java Language Specification and amended by the Inner Classes Specification included in the documentation for the JDK™ 1.1 software.

It is unlikely that the Java programming language will ever include this construct. Sun already carefully considered adopting it in 1996, to the extent of building and discarding working prototypes. Our conclusion was that bound method references are unnecessary and detrimental to the language. This decision was made in consultation with Borland International, who had previous experience with bound method references in Delphi Object Pascal.

We believe bound method references are unnecessary because another design alternative, inner classes, provides equal or superior functionality. In particular, inner classes fully support the requirements of user-interface event handling, and have been used to implement a user-interface API at least as comprehensive as the Windows Foundation Classes.

We believe bound method references are harmful because they detract from the simplicity of the Java programming language and the pervasively object-oriented character of the APIs. Bound method references also introduce irregularity into the language syntax and scoping rules. Finally, they dilute the investment in VM technologies because VMs are required to handle additional and disparate types of references and method linkage efficiently.
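In practice, the role a C#-style delegate plays is covered in Java by an interface implemented with an inner class, or, on Java 8 and later, by a lambda or method reference. A minimal sketch (the Callback interface and fireEvent method are made up for illustration):

public class DelegateStyle {
    // the "delegate type" is just an interface with a single method
    interface Callback {
        void invoke(String message);
    }

    static void fireEvent(Callback cb) {
        cb.invoke("button clicked");
    }

    public static void main(String[] args) {
        // anonymous inner class (the pre-Java-8 style the paper refers to)
        fireEvent(new Callback() {
            @Override
            public void invoke(String message) {
                System.out.println("inner class: " + message);
            }
        });

        // lambda (Java 8+), filling the same role a bound method reference would
        fireEvent(message -> System.out.println("lambda: " + message));
    }
}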

Difference between SelectedItem, SelectedValue and SelectedValuePath

Inspired by this question, I have written a blog post along with a code snippet here. Below are some excerpts from the blog.

SelectedItem – SelectedItem binds the actual item from the DataSource that is being displayed. It is of type object, so any type derived from object can be bound to this property. Since we will be using MVVM binding for our combo boxes, this is the property we can use to notify the VM that an item has been selected.

SelectedValue and SelectedValuePath – These are the two most confusing and misinterpreted properties of the ComboBox, but they come to the rescue when we want to bind the combobox to a value from an already-created object. Please check my last scenario in the following list to get a brief idea of these properties.

SQL Server - INNER JOIN WITH DISTINCT

You can use a CTE to get the distinct values of the second table and then join that with the first table. You also need to get the distinct values based on the LastName column; you do this with ROW_NUMBER() partitioned by LastName and ordered by FirstName.

Here's the code

;WITH SecondTableWithDistinctLastName AS
(
        SELECT  *
        FROM    (
                    SELECT  *,
                            ROW_NUMBER() OVER (PARTITION BY LastName ORDER BY FirstName) AS [Rank]
                    FROM    AddTbl
                )   
        AS      tableWithRank
        WHERE   tableWithRank.[Rank] = 1
) 
SELECT          a.FirstName, a.LastName, S.District
FROM            SecondTableWithDistinctLastName AS S
INNER JOIN      AddTbl AS a
    ON          a.LastName = S.LastName
ORDER   BY      a.FirstName

What is the benefit of zerofill in MySQL?

When you select a column with type ZEROFILL it pads the displayed value of the field with zeros up to the display width specified in the column definition. Values longer than the display width are not truncated. Note that usage of ZEROFILL also implies UNSIGNED.

Using ZEROFILL and a display width has no effect on how the data is stored. It affects only how it is displayed.

Here is some example SQL that demonstrates the use of ZEROFILL:

CREATE TABLE yourtable (x INT(8) ZEROFILL NOT NULL, y INT(8) NOT NULL);
INSERT INTO yourtable (x,y) VALUES
(1, 1),
(12, 12),
(123, 123),
(123456789, 123456789);
SELECT x, y FROM yourtable;

Result:

        x          y
 00000001          1
 00000012         12
 00000123        123
123456789  123456789
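If you only want the padded display for a particular query (without defining the column as ZEROFILL), LPAD at read time produces the same output; a sketch against the same example table:

SELECT LPAD(y, 8, '0') AS y_padded, y FROM yourtable;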

Installing Python 2.7 on Windows 8

I'm using Python 2.7 on Windows 8 too and have had no problem with it. Maybe you need to restart your computer as wclear said, or you can run the Python command-line program included in the Python installation folder (the entry just below IDLE). Hope it helps.

Get the index of the object inside an array, matching a condition

function findIndexByKeyValue(_array, key, value) {
    for (var i = 0; i < _array.length; i++) { 
        if (_array[i][key] == value) {
            return i;
        }
    }
    return -1;
}
var a = [
    {prop1:"abc",prop2:"qwe"},
    {prop1:"bnmb",prop2:"yutu"},
    {prop1:"zxvz",prop2:"qwrq"}];
var index = findIndexByKeyValue(a, 'prop2', 'yutu');
console.log(index);
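On ES2015+ engines, the built-in Array.prototype.findIndex does the same thing without a helper function; for example, against the same array a:

var index2 = a.findIndex(function (item) { return item.prop2 === 'yutu'; });
console.log(index2); // 1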

Insert current date in datetime format mySQL

You can use the MySQL function CURRENT_TIMESTAMP (or its synonym NOW()).
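A minimal sketch, assuming a hypothetical table with a DATETIME column named created_at:

INSERT INTO your_table (created_at) VALUES (CURRENT_TIMESTAMP);
-- or, equivalently
INSERT INTO your_table (created_at) VALUES (NOW());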

How to compare variables to undefined, if I don’t know whether they exist?

if (document.getElementById('theElement')) // do whatever after this

For undefined things that throw errors, test the property name of the parent object instead of just the variable name - so instead of:

if (blah) ...

do:

if (window.blah) ...
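Another option that never throws, even for variables that were never declared at all, is a typeof check:

if (typeof blah !== 'undefined') {
    // blah exists and has a value other than undefined
}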

How can I add a string to the end of each line in Vim?

You don't really need the g at the end. So it becomes:

:%s/$/*

Or if you just want the * at the end of, say lines 14-18:

:14,18s/$/*

or

:14,18norm A*

Java word count program

To count only specific word forms such as John, John99, John_John and John's, adjust the regex to your needs so that only those words are matched and counted.

    // requires: import java.util.regex.Matcher; import java.util.regex.Pattern;
    public static int wordCount(String content) {
        int count = 0;
        // letters, underscores and apostrophes, optionally followed by digits
        String regex = "([a-zA-Z_'’][0-9]*)+[\\s]*";
        Pattern pattern = Pattern.compile(regex);
        Matcher matcher = pattern.matcher(content);
        while(matcher.find()) {
            count++;
            System.out.println(matcher.group().trim()); //If want to display the matched words
        }
        return count;
    }
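For example, calling it on the sample words from above (assuming the method sits in a class with the regex imports in place):

    int total = wordCount("John John99 John_John John's");
    System.out.println(total);   // prints 4, after printing each matched word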

How to stop a function

def player(game_over):
    global score
    score += 50                 # stand-in for the real player-turn logic
    game_over = check_winner()  # here we ask check_winner what game_over should be, either True or False
    if not game_over:
        computer(game_over)     # we only do this if check_winner comes back as False

def check_winner():
    # here is the if/else deciding whether the game is over: return True if over, False if not
    if score == 100:
        return True
    else:
        return False

def computer(game_over):
    global score
    score += 50                 # stand-in for the real computer-turn logic
    game_over = check_winner()  # here we ask check_winner what game_over should be, either True or False
    if not game_over:
        player(game_over)       # we only do this if check_winner comes back as False

score = 0           # placeholder score so the example runs; see the note below about passing score around
game_over = False   # we need a variable to hold whether the game is over or not; start it out as False
player(game_over)   # start the loop, passing in the status of game_over

Above is a pretty simple example... I made up a condition in check_winner using score == 100 to denote the game being over (the score += 50 lines are just stand-ins so the example terminates).

You will want to use a similar method of passing score into check_winner, using game_over = check_winner(score). Then you can create a score at the beginning of your program and pass it through to computer and player just like game_over is handled.

Is there a way to set background-image as a base64 encoded image?

Had the same problem with base64. For anyone in the future with the same problem:

url = "data:image/png;base64,iVBORw0KGgoAAAAAAAAyCAYAAAAUYybjAAAgAElE...";

This would work executed from console, but not from within a script:

$img.css("background-image", "url('" + url + "')");

But after playing with it a bit, I came up with this:

var img = new Image();
img.src = url;
$img.css("background-image", "url('" + img.src + "')");

No idea why it works with a proxy image, but it does. Tested on Firefox Dev 37 and Chrome 40.

Hope it helps someone.

EDIT

Investigated a little bit further. It appears that sometimes base64 encoding (at least in my case) breaks with CSS because of line breaks present in the encoded value (in my case value was generated dynamically by ActionScript).

Simply using e.g.:

$img.css("background-image", "url('" + url.replace(/(\r\n|\n|\r)/gm, "") + "')");

works too, and even seems to be faster by a few ms than using a proxy image.

How to set ANDROID_HOME path in ubuntu?

In the console, just type these:

export ANDROID_HOME=$HOME/Android/Sdk
export PATH=$PATH:$ANDROID_HOME/tools

If you want to make it permanent, just add those lines to your ~/.bashrc file.
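For example (a sketch; the paths assume the default ~/Android/Sdk layout, and platform-tools is where adb lives):

echo 'export ANDROID_HOME=$HOME/Android/Sdk' >> ~/.bashrc
echo 'export PATH=$PATH:$ANDROID_HOME/tools:$ANDROID_HOME/platform-tools' >> ~/.bashrc
source ~/.bashrc   # reload the file in the current shell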

PostgreSQL psql terminal command

psql --pset=format=FORMAT

Great for executing queries from the command line, e.g.

psql --pset=format=unaligned -c "select bandanavalue from bandana where bandanakey = 'atlassian.confluence.settings';"

Eclipse Java Missing required source folder: 'src'

I was confused by this for hours.

Right click on project -> Build Path -> Configure Build Path -> Add Folder

How to display the function, procedure, triggers source code in postgresql?

Going slightly beyond just displaying the function, you can get the edit-in-place facility as well.

\ef <function_name> is very handy. It will open the source code of the function in editable format. You will not only be able to view it, you can edit and execute it as well.

Just \ef without function_name will open editable CREATE FUNCTION template.

For further reference -> https://www.postgresql.org/docs/9.6/static/app-psql.html
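If you only want to view the source without opening an editor, two related options (the function name below is a placeholder):

-- show the function's definition in the terminal (psql meta-command):
\sf my_function

-- the same thing as a regular query:
SELECT pg_get_functiondef('my_function'::regproc);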

How to compile without warnings being treated as errors?

Remove -Werror from your Make or CMake files, as suggested in this post
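For illustration, the flag usually hides in one of these places (hypothetical snippets; your build files will differ):

# Makefile: drop -Werror from the flags
CFLAGS = -Wall -Wextra

# CMakeLists.txt: remove -Werror from the options list
add_compile_options(-Wall -Wextra)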

how to get all child list from Firebase android

I did something like this:

Firebase ref = new Firebase(FIREBASE_URL);
ref.addValueEventListener(new ValueEventListener() {

@Override
public void onDataChange(DataSnapshot dataSnapshot) {
    Map<String, Object> objectMap = (HashMap<String, Object>)
             dataSnapshot.getValue();
    List<Match> list = new ArrayList<Match>();
    for (Object obj : objectMap.values()) {
        if (obj instanceof Map) {
            Map<String, Object> mapObj = (Map<String, Object>) obj;
            Match match = new Match();
            match.setSport((String) mapObj.get(Constants.SPORT));
            match.setPlayingWith((String) mapObj.get(Constants.PLAYER));
            list.add(match);
        }
    }
  }
  @Override
  public void onCancelled(FirebaseError firebaseError) {

  }
});

How to simulate a click by using x,y coordinates in JavaScript?

For security reasons, you can't move the real mouse pointer with JavaScript, and a script-generated click is never a "trusted" event. You can, however, look up the element at a given x,y position and dispatch a synthetic click event on it.

What is it that you are trying to accomplish?
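A minimal sketch of that approach (modern browsers; x and y are the coordinates you already have):

var el = document.elementFromPoint(x, y);   // element under that point
if (el) {
    el.dispatchEvent(new MouseEvent('click', {
        bubbles: true,
        clientX: x,
        clientY: y
    }));
}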

How to enable file sharing for my app?

Newer Xcode (7 and later) only requires the 'UIFileSharingEnabled' key in Info.plist. 'CFBundleDisplayName' is not required any more.

One more hint: do not modify only the Info.plist of the 'tests' target. The main app and the tests targets have separate Info.plist files.
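For reference, the raw Info.plist entry looks like this (same key, just in plist XML form):

<key>UIFileSharingEnabled</key>
<true/>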

How do I use regular expressions in bash scripts?

You need spaces around the operator =~. Also, on bash 3.2 and later the right-hand side should be unquoted (or stored in a variable), otherwise it is matched as a literal string instead of a regular expression.

i="test"
# keep the pattern unquoted (or in a variable) so it is treated as a regex
re='200[78]'
if [[ $i =~ $re ]];
then
  echo "OK"
else
  echo "not OK"
fi

Use of "this" keyword in C++

this points to the object on which the member function was called, so using it explicitly is optional when referring to that object's members; it is mainly needed to disambiguate a member from a local variable or parameter with the same name, or to pass or return the object itself.
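A minimal sketch of those two common uses, disambiguation and returning the object itself:

#include <iostream>

struct Counter {
    int value = 0;

    // the parameter "value" shadows the member "value", so this-> disambiguates
    Counter& set(int value) {
        this->value = value;
        return *this;          // returning the object itself enables chaining
    }

    Counter& add(int amount) {
        value += amount;       // equivalent to this->value += amount;
        return *this;
    }
};

int main() {
    Counter c;
    c.set(40).add(2);
    std::cout << c.value << '\n';   // prints 42
}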

Why can't a text column have a default value in MySQL?

For Ubuntu 16.04:

How to disable strict mode in MySQL 5.7:

Edit file /etc/mysql/mysql.conf.d/mysqld.cnf

If the line below exists in mysqld.cnf

sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

then replace it with

sql_mode='MYSQL40'

Otherwise

just add the line below to mysqld.cnf

sql_mode='MYSQL40'

This resolved the problem. Restart MySQL afterwards (for example with sudo service mysql restart) for the change to take effect.

Updating and committing only a file's permissions using git version control

@fooMonster's article worked for me:

# git ls-tree HEAD
100644 blob 55c0287d4ef21f15b97eb1f107451b88b479bffe    script.sh

As you can see, the file has 644 permissions (ignoring the leading 100). We would like to change it to 755:

# git update-index --chmod=+x script.sh

Commit the changes:

# git commit -m "Changing file permissions"
[master 77b171e] Changing file permissions
0 files changed, 0 insertions(+), 0 deletions(-)
mode change 100644 => 100755 script.sh
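On Git 2.9 and later you can also stage the permission change in one step before committing (same file name as above):

# git add --chmod=+x script.sh
# git commit -m "Changing file permissions"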