Try this; it works. You can add columns and rows programmatically. First, you need to create an item class:
public class Item
{
    public int Num { get; set; }
    public string Start { get; set; }
    public string Finish { get; set; }
}

private void generate_columns()
{
    DataGridTextColumn c1 = new DataGridTextColumn();
    c1.Header = "Num";
    c1.Binding = new Binding("Num");
    c1.Width = 110;
    dataGrid1.Columns.Add(c1);

    DataGridTextColumn c2 = new DataGridTextColumn();
    c2.Header = "Start";
    c2.Binding = new Binding("Start");
    c2.Width = 110;
    dataGrid1.Columns.Add(c2);

    DataGridTextColumn c3 = new DataGridTextColumn();
    c3.Header = "Finish";
    c3.Binding = new Binding("Finish");
    c3.Width = 110;
    dataGrid1.Columns.Add(c3);

    dataGrid1.Items.Add(new Item() { Num = 1, Start = "2012, 8, 15", Finish = "2012, 9, 15" });
    dataGrid1.Items.Add(new Item() { Num = 2, Start = "2012, 12, 15", Finish = "2013, 2, 1" });
    dataGrid1.Items.Add(new Item() { Num = 3, Start = "2012, 8, 1", Finish = "2012, 11, 15" });
}
public class EMView extends Activity {

    ImageView img, img1;
    int column_index;
    Intent intent = null;
    // Declare our Views, so we can access them later
    String logo, imagePath, Logo;
    Cursor cursor;
    // YOU CAN EDIT THIS TO WHATEVER YOU WANT
    private static final int SELECT_PICTURE = 1;
    String selectedImagePath;
    // ADDED
    String filemanagerstring;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        img = (ImageView) findViewById(R.id.gimg1);
        ((Button) findViewById(R.id.Button01))
                .setOnClickListener(new OnClickListener() {
                    public void onClick(View arg0) {
                        // in onCreate or any event where you want the user to
                        // select a file
                        Intent intent = new Intent();
                        intent.setType("image/*");
                        intent.setAction(Intent.ACTION_GET_CONTENT);
                        startActivityForResult(Intent.createChooser(intent,
                                "Select Picture"), SELECT_PICTURE);
                    }
                });
    }

    // UPDATED
    @Override
    public void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (resultCode == Activity.RESULT_OK) {
            if (requestCode == SELECT_PICTURE) {
                Uri selectedImageUri = data.getData();
                // OI FILE Manager
                filemanagerstring = selectedImageUri.getPath();
                // MEDIA GALLERY
                selectedImagePath = getPath(selectedImageUri); // also sets imagePath
                img.setImageURI(selectedImageUri);
                TextView txt = (TextView) findViewById(R.id.title);
                txt.setText(imagePath);
                Bitmap bm = BitmapFactory.decodeFile(imagePath);
                // img1.setImageBitmap(bm);
            }
        }
    }

    // UPDATED!
    public String getPath(Uri uri) {
        String[] projection = { MediaColumns.DATA };
        Cursor cursor = managedQuery(uri, projection, null, null, null);
        column_index = cursor.getColumnIndexOrThrow(MediaColumns.DATA);
        cursor.moveToFirst();
        imagePath = cursor.getString(column_index);
        return imagePath;
    }
}
I found it hard to decipher what is meant by "working directory of the VM". In my example, I was using the Java Service Wrapper program to execute a jar - the dump files were created in the directory where I had placed the wrapper program, e.g. c:\myapp\bin. The reason I discovered this is because the files can be quite large and they filled up the hard drive before I discovered their location.
You should use the Time.IsZero() function instead:
func (Time) IsZero
func (t Time) IsZero() bool
IsZero reports whether t represents the zero time instant, January 1, year 1, 00:00:00 UTC.
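For illustration, a minimal sketch of the check (the zero value of a time.Time is the zero instant):
package main

import (
    "fmt"
    "time"
)

func main() {
    var t time.Time                  // zero value
    fmt.Println(t.IsZero())          // true
    fmt.Println(time.Now().IsZero()) // false
}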
First off, you shouldn't add $ when you're outside of strings ($class in your first function being an exception), so it should be:
def doCopyMibArtefactsHere(projectName) {
step ([
$class: 'CopyArtifact',
projectName: projectName,
filter: '**/**.mib',
fingerprintArtifacts: true,
flatten: true
]);
}
def BuildAndCopyMibsHere(projectName, params) {
    build job: projectName, parameters: params
    doCopyMibArtefactsHere(projectName)
}
...
Now, as for your problem; the second function takes two arguments while you're only supplying one argument at the call. Either you have to supply two arguments at the call:
...
node {
stage('Prepare Mib'){
BuildAndCopyMibsHere('project1', null)
}
}
... or you need to add a default value to the functions' second argument:
def BuildAndCopyMibsHere(projectName, params = null) {
    build job: projectName, parameters: params
    doCopyMibArtefactsHere(projectName)
}
Data can be pulled into one Excel workbook from another through the Workbook method, an external reference, or the Data Import facility.
If you want to read, or even update, another Excel workbook, these methods can be used; we need not depend only on VBA for this.
For more info on these techniques, please click here to refer to the article.
string today = DateTime.Today.ToString("M/d");
SELECT vg.id,
vg.name
FROM v_groups vg INNER JOIN
people2v_groups p2vg ON vg.id = p2vg.v_group_id
WHERE p2vg.people_id = 0;
In a static class, keep a static integer field, then add 1 to it on every access (using a public get property). This will ensure you cycle through the whole int range before you get a non-unique value.
/// <summary>
/// The command id to use. This is a thread-safe id, that is unique over the lifetime of the process. It changes
/// at each access.
/// </summary>
internal static int NextCommandId
{
    get
    {
        // Interlocked.Increment (System.Threading) makes the read-modify-write
        // atomic; a plain _nextCommandId++ would not be thread-safe.
        return Interlocked.Increment(ref _nextCommandId);
    }
}
private static int _nextCommandId = 0;
This will produce a unique integer value within a running process. Since you do not explicitly define how unique your integer should be, this will probably fit.
You can try this:
ALTER TABLE Your_Table ADD table_ID int NOT NULL PRIMARY KEY auto_increment;
<input name="searchbox" onfocus="if (this.value=='search') this.value = ''" onblur="if (this.value=='') this.value = 'search'" type="text" value="search">
Add an onblur event too.
http://www.internetdoc.info/javascript-function/remove-key-from-array.htm
removeKey(arrayName,key);
function removeKey(arrayName, key) {
    var x;
    var tmpArray = new Array();
    for (x in arrayName) {
        if (x != key) { tmpArray[x] = arrayName[x]; }
    }
    return tmpArray;
}
You can create a PrintStream wrapping your OutputStream and then just call its print(String):
final OutputStream os = new FileOutputStream("/tmp/out");
final PrintStream printStream = new PrintStream(os);
printStream.print("String");
printStream.close();
Use this:
<script type="text/javascript" language="javascript">
$("#toggle").click(function() { $("#login-form, #recover-password").toggle(); });
</script>
Your HTML should look like:
<a id="toggle" href="javascript:void(0);">forgot password?</a>
<div id="login-form"></div>
<div id="recover-password" style="display:none;"></div>
Hey, all right! One line! I <3 jQuery.
New APIs have been added since the answer was selected; you can now use NSURLUtilities. Since different parts of URLs allow different characters, use the applicable character set. The following example encodes for inclusion in the query string:
encodedString = [myString stringByAddingPercentEncodingWithAllowedCharacters:NSCharacterSet.URLQueryAllowedCharacterSet];
To specifically convert '&', you'll need to remove it from the url query set or use a different set, as '&' is allowed in a URL query:
NSMutableCharacterSet *chars = NSCharacterSet.URLQueryAllowedCharacterSet.mutableCopy;
[chars removeCharactersInRange:NSMakeRange('&', 1)]; // %26
encodedString = [myString stringByAddingPercentEncodingWithAllowedCharacters:chars];
You can also use rcParams to change the font family globally.
import matplotlib.pyplot as plt
plt.rcParams["font.family"] = "cursive"
# This will change to your computer's default cursive font
The list of matplotlib's font family arguments is here.
Use request.args to get the parsed contents of the query string:
from flask import request
@app.route(...)
def login():
    username = request.args.get('username')
    password = request.args.get('password')
I am using what John Rasch said:
Stream streamContent = taxformUpload.FileContent;
Not a definite answer but too much to fit in comments:
I hypothesize they gave you a cert that either has a wrong issuer (although their server could use a more specific alert code for that) or a wrong subject. We know the cert matches your private key, because both curl and the openssl client paired them without complaining about a mismatch; but we don't actually know it matches their desired CA(s), because your curl uses openssl and the openssl SSL client does NOT enforce that a configured client cert matches certreq.CAs.
Do openssl x509 <clientcert.pem -noout -subject -issuer and the same on the cert from the test P12 that works. Do openssl s_client (or check the one you did) and look under Acceptable client certificate CA names; the name there, or one of them, should match (exactly!) the issuer(s) of your certs. If not, that's most likely your problem, and you need to check with them that you submitted your CSR to the correct place and in the correct way. Perhaps they have different regimes in different regions, or business lines, or test vs prod, or active vs pending, etc.
If the issuer of your cert does match the desired CAs, compare its subject to the working (test-P12) one: are they in a similar format? Are there any components in the working one not present in yours? If they allow it, try generating and submitting a new CSR with a subject name exactly the same as the test-P12 one, or as close as you can get, and see if that produces a cert that works better. (You don't have to generate a new key to do this, but if you choose to, keep track of which certs match which keys so you don't get them mixed up.) If that doesn't help, look at the certificate extensions with openssl x509 <cert -noout -text for any difference(s) that might reasonably be related to subject authorization, like KeyUsage, ExtendedKeyUsage, maybe Policy, maybe Constraints, maybe even something nonstandard.
If all else fails, ask the server operator(s) what their logs say about the problem, or if you have access look at the logs yourself.
There are two things to learn from this exercise:
- You need to iterate through the elements of the array somehow; you can do this with a for loop or a while loop.
- You need to store the result of the summation in an accumulator. For this, you need to create a variable.
int accumulator = 0;
for (int i = 0; i < myArray.length; i++) {
    accumulator += myArray[i];
}
Even simpler, for adding up a String[], use the built-in filter(StringUtils::isNotEmpty) from org.apache.commons.lang3:
import org.apache.commons.lang3.StringUtils;
String test = "a\nb\n\nc\n";
String[] lines = test.split("\\n", -1);
String[] result = Arrays.stream(lines).filter(StringUtils::isNotEmpty).toArray(String[]::new);
System.out.println(Arrays.toString(lines));
System.out.println(Arrays.toString(result));
and output:
[a, b, , c, ]
[a, b, c]
Slice notation is a[start_index:end_index:step], so you can use return a[::2], where start_index defaults to 0 and end_index defaults to len(a).
You can invert the hash: clients.invert["client_id"=>"2180"] returns "orange".
PostgreSQL driver jars are included in the Central Repository of Maven:
For PostgreSQL up to 9.1, use:
<dependency>
<groupId>postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>VERSION</version>
</dependency>
or for 9.2+
<dependency>
<groupId>org.postgresql</groupId>
<artifactId>postgresql</artifactId>
<version>VERSION</version>
</dependency>
(Thanks to @Caspar for the correction)
Now that most people use Kotlin instead of Java, here is the code in Kotlin for converting a bitmap into a base64 string.
import android.graphics.Bitmap
import android.util.Base64
import java.io.ByteArrayOutputStream

private fun encodeImage(bm: Bitmap): String? {
    val baos = ByteArrayOutputStream()
    bm.compress(Bitmap.CompressFormat.JPEG, 100, baos)
    val b = baos.toByteArray()
    return Base64.encodeToString(b, Base64.DEFAULT)
}
Sometimes you can get away with the tee function from itertools, it returns multiple iterators for the same generator that can be used independently.
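A minimal sketch of that pattern:
from itertools import tee

def gen():
    yield from range(3)

a, b = tee(gen())   # two independent iterators over one generator
print(list(a))      # [0, 1, 2]
print(list(b))      # [0, 1, 2]
Note that tee buffers items internally, so letting one iterator run far ahead of the other can use a lot of memory.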
Just as Jay said: delete those registry entries which point to paths other than c:\windows\system32; those are the culprits of the error. I got these errors in my VB6 IDE, and after deleting the anomalous registry entries the problem was fixed. Works like a charm.
Try mono:
http://www.go-mono.com/mono-downloads/download.html
This download works on all versions of Windows XP, 2003, Vista and Windows 7.
Why doesn't your code work?
The where method returns an ActiveRecord::Relation object (it acts like an array which contains the results of the where); it can be empty, but it will never be nil.
Business.where(id: -1)
#=> returns an empty ActiveRecord::Relation ( similar to an array )
Business.where(id: -1).nil? # ( similar to == nil? )
#=> returns false
Business.where(id: -1).empty? # test if the array is empty ( similar to .blank? )
#=> returns true
Option 1: Using .exists?
if Business.exists?(user_id: current_user.id)
# same as Business.where(user_id: current_user.id).exists?
# ...
else
# ...
end
Option 2: Using .present? (or .blank?, the opposite of .present?)
if Business.where(:user_id => current_user.id).present?
# less efficient than using .exists? (see generated SQL for .exists? vs .present?)
else
# ...
end
Option 3: Variable assignment in the if statement
if business = Business.where(:user_id => current_user.id).first
business.do_some_stuff
else
# do something else
end
This option can be considered a code smell by some linters (Rubocop for example).
Option 3b: Variable assignment
business = Business.where(user_id: current_user.id).first
if business
# ...
else
# ...
end
You can also use .find_by_user_id(current_user.id) instead of .where(...).first.
Best option:
- If you don't need to use the Business object(s): Option 1
- If you do need to use the Business object(s): Option 3

For +300k concurrent connections:
Set these variables in /etc/sysctl.conf:
fs.file-max = 10000000
fs.nr_open = 10000000
Also, change these variables in /etc/security/limits.conf:
* soft nofile 10000000
* hard nofile 10000000
root soft nofile 10000000
root hard nofile 10000000
And finally, increase the TCP buffers in /etc/sysctl.conf, too:
net.ipv4.tcp_mem = 786432 1697152 1945728
net.ipv4.tcp_rmem = 4096 4096 16777216
net.ipv4.tcp_wmem = 4096 4096 16777216
For more information, please refer to this.
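To apply the sysctl changes without a reboot you can reload them; the limits.conf changes take effect on the next login session:
sudo sysctl -p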
You had several issues with your code.
1) Missing a closing brace, }, within your rules.
2) In this case, there is no reason to use a function for the required rule. By default, the plugin can handle checkbox and radio inputs just fine, so using true is enough. However, this will simply do the same logic as in your original function and verify that at least one is checked.
3) If you also want only a maximum of two to be checked, then you'll need to apply the maxlength rule.
4) The messages option was missing the rule specification. It will work, but the one custom message would apply to all rules on the same field.
5) If a name attribute contains brackets, you must enclose it within quotes.
DEMO: http://jsfiddle.net/K6Wvk/
$(document).ready(function () {
    $('#formid').validate({ // initialize the plugin
        rules: {
            'test[]': {
                required: true,
                maxlength: 2
            }
        },
        messages: {
            'test[]': {
                required: "You must check at least 1 box",
                maxlength: "Check no more than {0} boxes"
            }
        }
    });
});
Here's how:
// Create an array with room for 100 integers
int[] nums = new int[100];
// Fill it with numbers using a for-loop
for (int i = 0; i < nums.length; i++)
nums[i] = i + 1; // +1 since we want 1-100 and not 0-99
// Compute sum
int sum = 0;
for (int n : nums)
sum += n;
// Print the result (5050)
System.out.println(sum);
For CSS that is reused across the entire site, I define it in the <head> section of the _Layout:
<head>
<link href="@Url.Content("~/Styles/main.css")" rel="stylesheet" type="text/css" />
@RenderSection("Styles", false)
</head>
and if I need some view-specific styles, I define the Styles section in each view:
@section Styles {
<link href="@Url.Content("~/Styles/view_specific_style.css")" rel="stylesheet" type="text/css" />
}
Edit: It's useful to know that the second parameter in @RenderSection, false, means that the section is not required on a view that uses this master page, and the view engine will blissfully ignore the fact that there is no "Styles" section defined in your view. If true, the view won't render and an error will be thrown unless the "Styles" section has been defined.
If you find that the pretty_generate option built into Ruby's JSON library is not "pretty" enough, I recommend my own NeatJSON gem for your formatting.
To use it:
gem install neatjson
and then use JSON.neat_generate instead of JSON.pretty_generate.
Like Ruby's pp, it will keep objects and arrays on one line when they fit, but wrap to multiple lines as needed. For example:
{
"navigation.createroute.poi":[
{"text":"Lay in a course to the Hilton","params":{"poi":"Hilton"}},
{"text":"Take me to the airport","params":{"poi":"airport"}},
{"text":"Let's go to IHOP","params":{"poi":"IHOP"}},
{"text":"Show me how to get to The Med","params":{"poi":"The Med"}},
{"text":"Create a route to Arby's","params":{"poi":"Arby's"}},
{
"text":"Go to the Hilton by the Airport",
"params":{"poi":"Hilton","location":"Airport"}
},
{
"text":"Take me to the Fry's in Fresno",
"params":{"poi":"Fry's","location":"Fresno"}
}
],
"navigation.eta":[
{"text":"When will we get there?"},
{"text":"When will I arrive?"},
{"text":"What time will I get to the destination?"},
{"text":"What time will I reach the destination?"},
{"text":"What time will it be when I arrive?"}
]
}
It also supports a variety of formatting options to further customize your output. For example, how many spaces before/after colons? Before/after commas? Inside the brackets of arrays and objects? Do you want to sort the keys of your object? Do you want the colons to all be lined up?
I have the same problem, but it only gets slow when trying to stop debugging in Visual Studio 2013, and I tried this:
For whoever wants a synchronous alternative to the accepted answer (I know I did):
var fs = require('fs');
var path = require('path');

var walk = function(dir) {
    let results = [], err = null, list;
    try {
        list = fs.readdirSync(dir);
    } catch (e) {
        err = e.toString();
    }
    if (err) return err;

    var i = 0;
    return (function next() {
        var file = list[i++];
        if (!file) return results;
        file = path.resolve(dir, file);
        let stat = fs.statSync(file);
        if (stat && stat.isDirectory()) {
            let res = walk(file);
            results = results.concat(res);
            return next();
        } else {
            results.push(file);
            return next();
        }
    })();
};

console.log(
    walk("./")
)
var arr = ['a', 'b', 1];
var results = arr.filter(function(item) {
    return typeof item === 'string'; // keep only the string entries
});
First, Latin-1 is the same as ISO-8859-1, so, the default was already OK for you. Right?
You successfully set the encoding to ISO-8859-1 with your command line parameter. You also set it programmatically to "Latin-1", but, that's not a recognized value of a file encoding for Java. See http://java.sun.com/javase/6/docs/technotes/guides/intl/encoding.doc.html
When you do that, looks like Charset resets to UTF-8, from looking at the source. That at least explains most of the behavior.
I don't know why OutputStreamWriter shows ISO8859_1. It delegates to closed-source sun.misc.* classes. I'm guessing it isn't quite dealing with encoding via the same mechanism, which is weird.
But of course you should always be specifying what encoding you mean in this code. I'd never rely on the platform default.
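For example, on Java 7+ you can pass the charset explicitly when building the writer (the file name is illustrative):
import java.io.*;
import java.nio.charset.StandardCharsets;

Writer w = new OutputStreamWriter(new FileOutputStream("out.txt"),
        StandardCharsets.ISO_8859_1); // explicit, not the platform default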
To achieve this, you can try the steps below:
In my case, I was not writing reg_url with :8080: String reg_url = "http://192.168.29.163:8080/register.php";
With the new version of MongoDB, this issue was resolved.
For me, the query that works, if I want to compare with DrawDate for example, is:
CAST(DrawDate AS DATE) = CAST(GETDATE() AS DATE)
This is comparing results with today's date.
or the whole query:
SELECT TOP (1000) *
FROM test
where DrawName != 'NULL' and CAST(DrawDate AS DATE) = CAST(GETDATE() AS DATE)
order by id desc
To load a .docx file:
if let htmlFile = Bundle.main.path(forResource: "fileName", ofType: "docx") {
    let url = URL(fileURLWithPath: htmlFile)
    do {
        let data = try Data(contentsOf: url)
        self.webView.load(data, mimeType: "application/vnd.openxmlformats-officedocument.wordprocessingml.document", textEncodingName: "UTF-8", baseURL: url)
    } catch {
        print("error loading file")
    }
}
Note: If this doesn't help, check below for Ricardo Martins' answer.
Create a PHP script with <?php phpinfo() ?> in it, run that from your browser, and look for the value Loaded Configuration File. This tells you which php.ini file PHP is using in the context of the web server.
A good algorithm to securely hash data is BCrypt:
Besides incorporating a salt to protect against rainbow table attacks, bcrypt is an adaptive function: over time, the iteration count can be increased to make it slower, so it remains resistant to brute-force search attacks even with increasing computation power.
There's a nice .NET implementation of BCrypt that is available also as a NuGet package.
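A minimal usage sketch with the BCrypt.Net package (method names as in that package):
// hash at registration time
string hash = BCrypt.Net.BCrypt.HashPassword("s3cret");
// verify at login time
bool ok = BCrypt.Net.BCrypt.Verify("s3cret", hash);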
For Python 3.x, use input(). For Python 2.x, use raw_input(). Don't forget you can add a prompt string in your input() call to create one less print statement: input("GUESS THAT NUMBER!").
You could also make it a post-build step. Setup a new configuration which is "Deploy" in addition to Debug and Release, and then have the post-build step copy over the correct web.config.
We use automated builds for all of our projects, and with those the build script updates the web.config file to point to the correct location. But that won't help you if you are doing everything from VS.
Go to data_dir and remove the Your_table.TMP file after repairing the <Your_table> table.
First, import the CoreLocation and MapKit libraries:
import MapKit
import CoreLocation
Inherit from CLLocationManagerDelegate in our class:
class ViewController: UIViewController, CLLocationManagerDelegate
Create a locationManager variable; this will hold your location data:
var locationManager = CLLocationManager()
Create a function to get the location info; be precise, this exact syntax works:
func locationManager(manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
In your function, create a constant for the user's current location:
let userLocation:CLLocation = locations[0] as CLLocation // note that locations is same as the one in the function declaration
Stop updating the location; this prevents your device from constantly re-centering the window on your location while moving (you can omit this if you want it to behave otherwise):
manager.stopUpdatingLocation()
Get the user's coordinates from the userLocation you just defined:
let coordinations = CLLocationCoordinate2D(latitude: userLocation.coordinate.latitude,longitude: userLocation.coordinate.longitude)
Define how zoomed-in you want your map to be:
let span = MKCoordinateSpanMake(0.2,0.2)
Combine these two to get the region:
let region = MKCoordinateRegion(center: coordinations, span: span)//this basically tells your map where to look and where from what distance
Now set the region, and choose whether you want it to go there with animation or not:
mapView.setRegion(region, animated: true)
Close your function:
}
From your button, or however you choose, set the locationManager delegate to self. Now allow the location to be shown and designate the accuracy:
locationManager.desiredAccuracy = kCLLocationAccuracyBest
authorize:
locationManager.requestWhenInUseAuthorization()
To be able to authorize location services, you need to add these two lines to your plist:
Get the location:
locationManager.startUpdatingLocation()
Show it to the user:
mapView.showsUserLocation = true
This is my complete code:
import UIKit
import MapKit
import CoreLocation

class ViewController: UIViewController, CLLocationManagerDelegate {

    @IBOutlet weak var mapView: MKMapView!
    var locationManager = CLLocationManager()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    @IBAction func locateMe(sender: UIBarButtonItem) {
        locationManager.delegate = self
        locationManager.desiredAccuracy = kCLLocationAccuracyBest
        locationManager.requestWhenInUseAuthorization()
        locationManager.startUpdatingLocation()
        mapView.showsUserLocation = true
    }

    func locationManager(manager: CLLocationManager, didUpdateLocations locations: [CLLocation]) {
        let userLocation: CLLocation = locations[0] as CLLocation
        manager.stopUpdatingLocation()
        let coordinations = CLLocationCoordinate2D(latitude: userLocation.coordinate.latitude, longitude: userLocation.coordinate.longitude)
        let span = MKCoordinateSpanMake(0.2, 0.2)
        let region = MKCoordinateRegion(center: coordinations, span: span)
        mapView.setRegion(region, animated: true)
    }
}
In order to generate the value, use
private static final long serialVersionUID = $randomLong$L;
$END$
and provide the randomLong template variable with the following value: groovyScript("new Random().nextLong().abs()")
https://pharsfalvi.wordpress.com/2015/03/18/adding-serialversionuid-in-idea/
UPDATE t1
INNER JOIN t2 ON t2.t1_id = t1.id
INNER JOIN t3 ON t2.t3_id = t3.id
SET t1.a = 'something',
t2.b = 42,
t3.c = t2.c
WHERE t1.a = 'blah';
To see what this is going to update, you can convert this into a select statement, e.g.:
SELECT t2.t1_id, t2.t3_id, t1.a, t2.b, t2.c AS t2_c, t3.c AS t3_c
FROM t1
INNER JOIN t2 ON t2.t1_id = t1.id
INNER JOIN t3 ON t2.t3_id = t3.id
WHERE t1.a = 'blah';
An example using the same tables as the other answer:
SELECT Books.BookID, Orders.OrderID,
Orders.Quantity AS CurrentQuantity,
Orders.Quantity + 2 AS NewQuantity,
Books.InStock AS CurrentStock,
Books.InStock - 2 AS NewStock
FROM Books
INNER JOIN Orders ON Books.BookID = Orders.BookID
WHERE Orders.OrderID = 1002;
UPDATE Books
INNER JOIN Orders ON Books.BookID = Orders.BookID
SET Orders.Quantity = Orders.Quantity + 2,
Books.InStock = Books.InStock - 2
WHERE Orders.OrderID = 1002;
EDIT:
Just for fun, let's add something a bit more interesting.
Let's say you have a table of books and a table of authors. Your books have an author_id. But when the database was originally created, no foreign key constraints were set up, and later a bug in the front-end code caused some books to be added with invalid author_ids. As a DBA you don't want to have to go through all of these books to check what the author_id should be, so the decision is made that the data capturers will fix the books to point to the right authors. But there are too many books to go through each one, and let's say you know that the ones that have an author_id that corresponds with an actual author are correct; it's just the ones that have nonexistent author_ids that are invalid. There is already an interface for the users to update the book details, and the developers don't want to change it just for this problem. But the existing interface does an INNER JOIN authors, so all of the books with invalid authors are excluded.
What you can do is this: insert a fake author record like "Unknown author". Then update the author_id of all the bad records to point to the Unknown author. Then the data capturers can search for all books with the author set to "Unknown author", look up the correct author, and fix them.
How do you update all of the bad records to point to the Unknown author? Like this (assuming the Unknown author's author_id is 99999):
UPDATE books
LEFT OUTER JOIN authors ON books.author_id = authors.id
SET books.author_id = 99999
WHERE authors.id IS NULL;
The above will also update books that have a NULL author_id to the Unknown author. If you don't want that, of course you can add AND books.author_id IS NOT NULL.
We faced this issue today and we solved it. Here is what we did.
The leaflet map loads in the div below:
<div id="map_container">
<div id="listing_map" class="right_listing"></div>
</div>
When a form input changes or is submitted, we follow the step below: the leaflet map container is removed from the page and created again.
$( '#map_container' ).html( ' ' ).append( '<div id="listing_map" class="right_listing"></div>' );
After this code, my leaflet map works fine, reloading when the form filter changes.
Thank you.
I know I am late to the party, but as they say, "better late than never". Everybody has their own way of explaining things; let me try to sum it up and simplify it for you in a few steps with an example. Suppose you have a simple form, form.jsp:
<form:form action="processForm" modelAttribute="student">
First Name : <form:input path="firstName" />
<br><br>
Last Name : <form:input path="lastName" />
<br><br>
<input type="submit" value="submit"/>
</form:form>
path="firstName" path="lastName" These are the fields/properties in the StudentClass when the form is called their getters are called but once submitted their setters are called and their values are set in the bean that was indicated in the modelAttribute="student" in the form tag.
We have StudentController that includes the following methods;
@RequestMapping("/showForm")
public String showForm(Model theModel){ //Model is used to pass data between
//controllers and views
theModel.addAttribute("student", new Student()); //attribute name, value
return "form";
}
@RequestMapping("/processForm")
public String processForm(@ModelAttribute("student") Student theStudent){
System.out.println("theStudent :"+ theStudent.getLastName());
return "form-details";
}
//@ModelAttribute("student") Student theStudent
//Spring automatically populates the object data with form data all behind the
//scenes
Now, finally, we have form-details.jsp:
<b>Student Information</b>
${student.firstName}
${student.lastName}
So back to the question: what is @ModelAttribute in Spring MVC? A sample definition from the source (http://www.baeldung.com/spring-mvc-and-the-modelattribute-annotation): the @ModelAttribute is an annotation that binds a method parameter or method return value to a named model attribute and then exposes it to a web view.
What actually happens is that it gets all the values of your form that were submitted and then holds them for you to bind or assign to the object. It works like @RequestParam, where we only get a parameter and assign the value to some field; the only difference is that @ModelAttribute holds all form data rather than a single parameter. It creates a bean for you that holds the form-submitted data for the developer to use later on.
To recap the whole thing:
Step 1: A request is sent and our method showForm runs; a temporary bean named student is set on the model and forwarded to the form: theModel.addAttribute("student", new Student());
Step 2: On form submission, modelAttribute="student" changes the student model, which now holds all parameters of the form.
Step 3: @ModelAttribute("student") Student theStudent: we fetch the values held by @ModelAttribute and assign the whole bean/object to theStudent.
Step 4: And then we use it as we wish, such as showing it on the page, like I did.
I hope this helps you understand the concept. Thanks.
You can use the FileInfo object to get all the information about your file.
FileInfo f = new FileInfo(@"C:\Hello\AnotherFolder\The File Name.PDF");
MessageBox.Show(f.Name);
MessageBox.Show(f.FullName);
MessageBox.Show(f.Extension );
MessageBox.Show(f.DirectoryName);
Another possible reason is if you had some models defined in another file (not in a package) and haven't referenced them anywhere else.
For me, simply adding from .graph_model import * to admin.py (where graph_model.py was the new file) fixed the problem.
Use the retainAll() method of Set:
Set<String> s1;
Set<String> s2;
s1.retainAll(s2); // s1 now contains only elements in both sets
If you want to preserve the sets, create a new set to hold the intersection:
Set<String> intersection = new HashSet<String>(s1); // use the copy constructor
intersection.retainAll(s2);
The javadoc of retainAll() says it's exactly what you want:
Retains only the elements in this set that are contained in the specified collection (optional operation). In other words, removes from this set all of its elements that are not contained in the specified collection. If the specified collection is also a set, this operation effectively modifies this set so that its value is the intersection of the two sets.
Be sure the font is the same for all browsers. If it is the same font, then the problem has no solution using cross-browser CSS.
Because every browser has its own font rendering engine, they are all different. They can also differ in later versions, or across different OS's.
UPDATE: For those who do not understand the browser and OS font rendering differences, read this and this.
However, the difference is not even noticeable by most people, and users accept that. Forget pixel-perfect cross-browser design, unless you are:
UPDATE: I checked the example page. Tuning the kerning by text-rendering should help:
text-rendering: optimizeLegibility;
More references here: font-smoothing (as mentioned) and text-rendering. Tuning these properties may help, as their default values are not the same across browsers.
. Tuning these properties may help as their default values are not the same across browsers.I sometimes use a parallel groovy class just to act as a bag of strings
The java class here
public class Test {
public static void main(String[] args) {
System.out.println(TestStrings.json1);
// consume .. parse json
}
}
And the coveted multiline strings here in TestStrings.groovy
class TestStrings {
public static String json1 = """
{
"name": "Fakeer's Json",
"age":100,
"messages":["msg 1","msg 2","msg 3"]
}""";
}
Of course this is for static strings only. If I have to insert variables in the text I will just change the entire file to groovy. Just maintain strong-typing practices and it can be pulled off.
Decode the JSON string using json_decode() and then loop through it using a regular loop:
$arr = json_decode('[{"var1":"9","var2":"16","var3":"16"},{"var1":"8","var2":"15","var3":"15"}]', true); // true gives associative arrays, so $item['var1'] works
foreach ($arr as $item) { // foreach element in $arr
    $uses = $item['var1']; // etc
}
Several ways to check if an object property exists.
const dog = { name: "Spot" }
if (dog.name) console.log("Yay 1"); // Prints.
if (dog.sex) console.log("Yay 2"); // Doesn't print.
if ("name" in dog) console.log("Yay 3"); // Prints.
if ("sex" in dog) console.log("Yay 4"); // Doesn't print.
if (dog.hasOwnProperty("name")) console.log("Yay 5"); // Prints.
if (dog.hasOwnProperty("sex")) console.log("Yay 6"); // Doesn't print, but prints undefined.
Besides LocalBroadcastManager, Event Bus, and Messenger, already answered in this question, we can use a PendingIntent to communicate from a service.
As mentioned in my blog post, communication between a service and an activity can be done using a PendingIntent. For that we can use createPendingResult(), which creates a new PendingIntent object that you can hand to the service to use, and to send result data back to your activity inside the onActivityResult(int, int, Intent) callback. Since a PendingIntent is Parcelable, and can therefore be put into an Intent extra, your activity can pass this PendingIntent to the service. The service, in turn, can call the send() method on the PendingIntent to notify the activity via onActivityResult of an event.
Activity
public class PendingIntentActivity extends AppCompatActivity {
    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        PendingIntent pendingResult = createPendingResult(100, new Intent(), 0);
        Intent intent = new Intent(getApplicationContext(), PendingIntentService.class);
        intent.putExtra("pendingIntent", pendingResult);
        startService(intent);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        if (requestCode == 100 && resultCode == 200) {
            Toast.makeText(this, data.getStringExtra("name"), Toast.LENGTH_LONG).show();
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}
Service
public class PendingIntentService extends Service {
    private static final String[] items = { "lorem", "ipsum", "dolor", "sit", "amet",
            "consectetuer", "adipiscing", "elit", "morbi", "vel", "ligula", "vitae",
            "arcu", "aliquet", "mollis", "etiam", "vel", "erat", "placerat", "ante",
            "porttitor", "sodales", "pellentesque", "augue", "purus" };
    private PendingIntent data;

    @Override
    public void onCreate() {
        super.onCreate();
    }

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        data = intent.getParcelableExtra("pendingIntent");
        new LoadWordsThread().start();
        return START_NOT_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }

    @Override
    public void onDestroy() {
        super.onDestroy();
    }

    class LoadWordsThread extends Thread {
        @Override
        public void run() {
            for (String item : items) {
                if (!isInterrupted()) {
                    Intent result = new Intent();
                    result.putExtra("name", item);
                    try {
                        data.send(PendingIntentService.this, 200, result);
                    } catch (PendingIntent.CanceledException e) {
                        e.printStackTrace();
                    }
                    SystemClock.sleep(400);
                }
            }
        }
    }
}
if(arrayName.length > index && arrayName[index] !== null) {
//arrayName[index] has a value
}
If you have IIS Express (with Visual Studio):
To enable the SSL within IIS Express, you have to just set “SSL Enabled = true” in the project properties window.
See the steps and pictures at this code project.
IIS Express will generate a certificate for you (you'll be prompted for it, etc.). Note that depending on configuration the site may still automatically start with the URL rather than the SSL URL. You can see the SSL URL - note the port number and replace it in your browser address bar, you should be able to get in and test.
From there you can right click on your project, click property pages, then start options and assign the start URL - put the new https with the new port (usually 44301 - notice the similarity to port 443) and your project will start correctly from then on.
As for your immediate question of how to make it pull master: you need to do what it says. Specify the refspec to pull from in your branch config.
[branch "master"]
merge = refs/heads/master
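Equivalently, from the command line (assuming the remote is named origin):
git config branch.master.remote origin
git config branch.master.merge refs/heads/master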
Efficiency isn't going to matter for something like this in 99.999999% of situations. Do whatever is easier to read and/or maintain.
In my apps I usually rely on classes to provide hiding and showing, for example .addClass('isHidden') / .removeClass('isHidden'), which allows me to animate things with CSS3 if I want to. It provides more flexibility.
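A minimal sketch of that pattern (the class name is from above; the transition values are illustrative):
.isHidden {
    opacity: 0;
    visibility: hidden;
    transition: opacity 0.3s ease, visibility 0.3s ease;
}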
You could try
moment().add(1, 'M').subtract(1, 'day').format('DD-MM-YYYY')
The simple solution that worked for me is below: wrap the table with a div and change the line-height; this line-height is taken as a ratio.
<div class="col-md-6" style="line-height: 0.5">_x000D_
<table class="table table-striped" >_x000D_
<thead>_x000D_
<tr>_x000D_
<th>Parameter</th>_x000D_
<th>Recorded Value</th>_x000D_
<th>Individual Score</th>_x000D_
</tr>_x000D_
</thead>_x000D_
<tbody>_x000D_
<tr>_x000D_
<td>Respiratory Rate</td>_x000D_
<td>Doe</td>_x000D_
<td>[email protected]</td>_x000D_
</tr>_x000D_
<tr>_x000D_
<td>Respiratory Effort</td>_x000D_
<td>Moe</td>_x000D_
<td>[email protected]</td>_x000D_
</tr>_x000D_
<tr>_x000D_
<td>Oxygon Saturation</td>_x000D_
<td>Dooley</td>_x000D_
<td>[email protected]</td>_x000D_
</tr>_x000D_
</tbody>_x000D_
</table>_x000D_
</div>
_x000D_
Try changing the value to whatever fits for you.
public boolean dispatchTouchEvent(MotionEvent ev) {
    boolean consume = false;
    if (onInterceptTouchEvent(ev)) {
        consume = onTouchEvent(ev);
    } else {
        consume = child.dispatchTouchEvent(ev);
    }
    return consume;
}
Log in to the Keycloak admin console website, select the realm and its client, then make sure all URIs of the client are prefixed with the protocol, that is, with http:// for example. An example would be http://localhost:8082/*.
Another way to solve the issue is to view the Keycloak server console output, locate the line stating the request was refused, copy the redirect_uri value displayed in it, and paste it into the Valid Redirect URIs field of the client in the Keycloak admin console website. The requested URI then becomes one of the accepted values.
There's better support for this now through conda-env. You can, for example, now do:
name: sample_env
channels:
dependencies:
- requests
- bokeh>=0.10.0
- pip:
- "--editable=git+https://github.com/pythonforfacebook/facebook-sdk.git@8c0d34291aaafec00e02eaa71cc2a242790a0fcc#egg=facebook_sdk-master"
It's still calling pip under the covers, but you can now unify your conda and pip package specifications in a single environment.yml file.
If you wanted to update your root environment with this file, you would need to save it to a file (for example, environment.yml), then run the command: conda env update -f environment.yml.
It's more likely that you would want to create a new environment:
conda env create -f environment.yml
(changed as supposed in the comments)
Simply return false from the onsubmit handler
<form onsubmit="return false;">
or if you want a handler in the middle
<script>
var submitHandler = function() {
// do stuff
return false;
}
</script>
<form onsubmit="return submitHandler()">
This is working for me on CentOS.
First: in the file /etc/httpd/conf/httpd.conf, add
Listen 8079
after
Listen 80
This tells your server to listen on port 8079.
Second: go to your virtual host, e.g. /etc/httpd/conf.d/vhost.conf, and add the code below:
<VirtualHost *:8079>
DocumentRoot /var/www/html/api_folder
ServerName example.com
ServerAlias www.example.com
ServerAdmin [email protected]
ErrorLog logs/www.example.com-error_log
CustomLog logs/www.example.com-access_log common
</VirtualHost>
This means that when you go to www.example.com:8079 you are served from /var/www/html/api_folder. But first you need to restart the service:
sudo service httpd restart
I would say use TryParse; it'll return false if the uint is too big for an int.
Don't forget that a uint can go much bigger than an int, as long as you're going > 0.
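A small sketch of that check (the value is illustrative):
uint u = 3000000000;                             // larger than int.MaxValue
bool ok = int.TryParse(u.ToString(), out int i); // false: too big for an int
Console.WriteLine(ok);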
Update: The Python documentation for its regex module, re, includes a section on simulating scanf, which I found more useful than any of the answers above.
The title attribute works on most HTML tags and is widely supported by modern browsers.
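For example:
<span title="Shown as a tooltip on hover">hover me</span>
<abbr title="HyperText Markup Language">HTML</abbr>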
A different approach, because in my case I needed to also check whether it contained certain words (like 'test' in this example), not characters alone:
input_string = 'abc test'
input_string_test = input_string
allowed_list = ['a', 'b', 'c', 'test', ' ']
for allowed_list_item in allowed_list:
    input_string_test = input_string_test.replace(allowed_list_item, '')

if not input_string_test:
    # test passed
So, the allowed strings (char or word) are cut from the input string. If the input string only contained strings that were allowed, it will leave an empty string, and the check if not input_string_test therefore passes.
It's a common misconception that you can denormalise like this for performance. Use date_trunc('month', date) for your queries and add an index expression for it if you find it running slow.
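A minimal sketch of that index expression, assuming a table events with a timestamp (without time zone) column named date (names illustrative):
CREATE INDEX events_month_idx ON events (date_trunc('month', date));

SELECT date_trunc('month', date) AS month, count(*)
FROM events
GROUP BY 1;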
sqlplus
If you're using sqlplus, you may want to first set up a few parameters for nicer viewing, in case your columns are getting mangled (these variables should not persist after you exit your sqlplus session):
set colsep '|'
set linesize 167
set pagesize 30
set pagesize 1000
You can then use something like this to see all table names:
SELECT table_name, owner, tablespace_name FROM all_tables;
As @Justin Cave mentions, you can use this to show only tables that you own:
SELECT table_name FROM user_tables;
Keep in mind that some "tables" may actually be "views" so you can also try running something like:
SELECT view_name FROM all_views;
This should yield something that looks fairly acceptable like:
Look at this https://dev.to/risafj/beginner-s-guide-to-oauth-understanding-access-tokens-and-authorization-codes-2988
First you need an access_token:
$code = $_GET['code'];
$clientid = "xxxxxxx.apps.googleusercontent.com";
$clientsecret = "xxxxxxxxxxxxxxxxxxxxx";
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://www.googleapis.com/oauth2/v4/token");
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "client_id=".urlencode($clientid)."&client_secret=".urlencode($clientsecret)."&code=".urlencode($code)."&grant_type=authorization_code&redirect_uri=". urlencode("https://yourdomain.com"));
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/x-www-form-urlencoded'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$server_output = curl_exec($ch);
curl_close ($ch);
$server_output = json_decode($server_output);
$access_token = $server_output->access_token;
$refresh_token = $server_output->refresh_token;
$expires_in = $server_output->expires_in;
Save the Access Token, the Refresh Token, and the expires_in value in a database. The Access Token expires after $expires_in seconds. Then you need to grab a new Access Token (and save it in the database) with the following request:
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, "https://www.googleapis.com/oauth2/v4/token");
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, "client_id=".urlencode($clientid)."&client_secret=".urlencode($clientsecret)."&refresh_token=".urlencode($refresh_token)."&grant_type=refresh_token");
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/x-www-form-urlencoded'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$server_output = curl_exec($ch);
curl_close ($ch);
$server_output = json_decode($server_output);
$access_token = $server_output->access_token;
$expires_in = $server_output->expires_in;
Bear in mind to add the redirect_uri domain to your domains in your Google Console: https://console.cloud.google.com/apis/credentials in the tab "OAuth 2.0-Client-IDs". There you also find your Client-ID and Client-Secret.
The Load method of DataTable executes NextResult on the DataReader, so you shouldn't call NextResult explicitly when using Load; otherwise, odd tables in the sequence would be omitted.
Here is a generic solution to load multiple tables using a DataReader.
// your command initialization code here
// ...
DataSet ds = new DataSet();
DataTable t;
using (DbDataReader reader = command.ExecuteReader())
{
while (!reader.IsClosed)
{
t = new DataTable();
t.Load(reader);
ds.Tables.Add(t);
}
}
Travis-CI and Jenkins, while both tools for continuous integration, are very different.
Travis is a hosted service (free for open source) while you have to host, install and configure Jenkins.
Travis does not have jobs as in Jenkins. The commands to run to test the code are taken from a file named .travis.yml which sits alongside your project code. This makes it easy to have different test code per branch, since each branch can have its own version of the .travis.yml file.
You can have a similar feature with Jenkins if you use one of the following plugins:
One of them uses a .jervis.yml file found at the root of the project code; if .jervis.yml does not exist, it will fall back to using the .travis.yml file instead.
There are other hosted services you might also consider for continuous integration (non-exhaustive list):
You might want to stay with Jenkins because you are familiar with it or don't want to depend on a 3rd party for your continuous integration system. Otherwise, I would drop Jenkins and go with one of the free hosted CI services, as they save you a lot of trouble (hosting, installing, configuring, preparing jobs).
Depending on where your code repository is hosted I would make the following choices:
To setup Travis-CI on a github project, all you have to do is:
The features you get are:
To clarify some of the logical equivalence solutions.
An if statement
if (a) then b
is logically equivalent to
(!a || b)
It's the first line in the "Logical equivalences involving conditional statements" section of the Logical equivalence Wikipedia article.
To include the else, all you would do is add another conditional
if(a) then b;
if(!a) then c;
which is logically equivalent to (!a || b) && (a || c)
So using the OP as an example:
IF IsNumeric(@OrderNumber) = 1
OrderNumber = @OrderNumber
ELSE
OrderNumber LIKE '%' + @OrderNumber + '%'
the logical equivalent would be:
(IsNumeric(@OrderNumber) <> 1 OR OrderNumber = @OrderNumber)
AND (IsNumeric(@OrderNumber) = 1 OR OrderNumber LIKE '%' + @OrderNumber + '%' )
// usage: log('inside coolFunc',this,arguments);
// http://paulirish.com/2009/log-a-lightweight-wrapper-for-consolelog/
window.log = function(){
log.history = log.history || []; // store logs to an array for reference
log.history.push(arguments);
if(this.console){
console.log( Array.prototype.slice.call(arguments) );
}
};
Using window.log will allow you to perform the same action as console.log, but it checks if the browser you are using has the ability to use console.log first, so as not to error out for compatibility reasons (IE 6, etc.).
My steps:
In addition to making sure "Build Active Architectures" was set to YES as mentioned in previous answers, this is what did it for me.
This is the approach I used myself, with conditional formatting, and I post it here because I think it is a clean way.
$"{time.Days:#0:;;\\}{time.Hours:#0:;;\\}{time.Minutes:00:}{time.Seconds:00}"
Example outputs:
00:00 (minimum)
1:43:04 (when we have hours)
15:03:01 (when hours are more than 1 digit)
2:4:22:04 (when we have days)
The formatting is easy. In time.Days:#0:;;\\, the format before ;; applies when the value is positive; negative values are ignored; and for zero values the ;;\\ part hides it in the formatted string. Note that the escaped backslash is necessary, otherwise it will not format correctly.
It is better to use an API key in the header, not in the URL.
URLs are saved in the browser's history if tried from a browser. That is a rare scenario, but the problem comes when the backend server logs all URLs: it might expose the API key.
There are two ways you can use an API key in a header:
Basic Authorization:
Example from stripe:
curl https://api.stripe.com/v1/charges -u sk_test_BQokikJOvBiI2HlWgH4olfQ2:
curl uses the -u flag to pass basic auth credentials (adding a colon after your API key will prevent it from asking you for a password).
Custom Header
curl -H "X-API-KEY: 6fa741de1bdd1d91830ba" https://api.mydomain.com/v1/users
CTRL + SCR LK (Scroll Lock) worked for me.
Set the CSS position: relative;
on the box. This causes all absolute positions of objects inside to be relative to the corners of that box. Then set the following CSS on the "Bet 5 days ago" line:
position: absolute;
bottom: 0;
right: 0;
If you need to space the text farther away from the edge, you could change 0 to 2px or similar.
You need to use the AND statement in your formula:
=IF(AND(IF(NOT(ISBLANK(Q2));TRUE;FALSE);Q2<=R2);"1";"0")
And if both conditions are met, return 1.
You could also add more conditions in your AND statement.
The other answers are valid, but the official Symfony Best Practices guide suggests using the web/ folder to store all assets, instead of different bundles:
Scattering your web assets across tens of different bundles makes it more difficult to manage them. Your designers' lives will be much easier if all the application assets are in one location.
Templates also benefit from centralizing your assets, because the links are much more concise[...]
I'd add to this by suggesting that you only put micro-assets within micro-bundles, such as a few lines of styles only required for a button in a button bundle, for example.
For HTML5, you can use the 'hidden' attribute.
<option hidden>Hidden option</option>
It is not supported by IE < 11. But if you need only to hide a few elements, maybe it would be better to just set the hidden attribute in combination with disabled in comparison to adding/removing elements or doing not semantically correct constructions.
<select>
  <option>Option1</option>
  <option>Option2</option>
  <option hidden>Hidden Option</option>
</select>
First, try omitting the quotes from 12 and 24. Worth a shot.
Second, it's better to do this in CSS. See also http://www.w3schools.com/css/css_font.asp . Here is an inline style for a table tag:
<table style='font-family:"Courier New", Courier, monospace; font-size:80%' ...>...</table>
Better still, use an external style sheet or a style tag near the top of your HTML document. See also http://www.w3schools.com/css/css_howto.asp .
Typing file myfile.tex in a terminal can sometimes tell you the encoding and type of a file using a series of algorithms and magic numbers. It's fairly useful, but don't rely on it providing concrete or reliable information.
A Localizable.strings file (found in localised Mac OS X applications) is typically reported to be a UTF-16 C source file.
In a Debian/NGINX environment I did the following.
Go to /etc/mysql/mysql.conf.d and edit mysqld.cnf. If you find log-error = /var/log/mysql/error.log, add the following 2 lines below it:
general_log_file = /var/log/mysql/mysql.log
general_log = 1
To see the logs, go to /var/log/mysql and run tail -f mysql.log.
Remember to comment these lines out once you are done with debugging. If you are in a production environment, delete mysql.log, as this log file grows quickly and can be huge.
In your case, I would use YAML.
It is an elegant and professional standard for dealing with multiple parameters, and the values are loaded from a separate file. You can see some info at this link:
https://keleshev.com/yaml-quick-introduction
But it is easy to Google it; as it is a standard, there is plenty of info about it, and you can find what best fits your understanding. ;)
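A minimal sketch with PyYAML (the file name and keys are illustrative):
import yaml  # PyYAML

with open("params.yml") as f:
    params = yaml.safe_load(f)  # e.g. {'learning_rate': 0.01, 'epochs': 10}
print(params["epochs"])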
Best regards.
Check out Fancybox. If you need the video to autoplay this example site was helpful!
Sometimes it might be something pretty simple, like your resource file missing from the test-classes folder due to some cleanups.
Convert timestamps to date strings in JS:
moment().format('YYYY-MM-DD hh:mm:ss');
// "2020-01-10 11:55:43"
moment(1578478211000).format('YYYY-MM-DD hh:mm:ss');
// "2020-01-08 06:10:11"
AD registers Service Location (SRV) resource records in its DNS server which you can query to get the port and the hostname of the responsible LDAP server in your domain.
Just try this on the command-line:
C:\> nslookup
> set types=all
> _ldap._tcp.<<your.AD.domain>>
_ldap._tcp.<<your.AD.domain>> SRV service location:
priority = 0
weight = 100
port = 389
svr hostname = <<ldap.hostname>>.<<your.AD.domain>>
(provided that your nameserver is the AD nameserver which should be the case for the AD to function properly)
Please see Active Directory SRV Records and Windows 2000 DNS white paper for more information.
Here is an example to play with inline SVG for a list bullet (2020 browsers):
list-style-image: url("data:image/svg+xml,
  <svg width='50' height='50'
    xmlns='http://www.w3.org/2000/svg'
    viewBox='0 0 72 72'>
  <rect width='100%' height='100%' fill='pink'/>
  <path d='M70 42a3 3 90 0 1 3 3a3 3 90 0 1-3 3h-12l-3 3l-6 15l-3 3h-12
    l-6-3v-21v-3l15-15a3 3 90 0 1 0 0c3 0 3 0 3 3l-6 12h30
    m-54 24v-24h9v24z'/></svg>")
- width & height to set the size
- M70 42 to position the hand
- rect to paint the background
li{
font-size:2em;
list-style-image: url("data:image/svg+xml,<svg width='3em' height='3em' xmlns='http://www.w3.org/2000/svg' viewBox='0 0 72 72'><rect width='100%' height='100%' fill='pink'/><path d='M70 42a3 3 90 0 1 3 3a3 3 90 0 1-3 3h-12l-3 3l-6 15l-3 3h-12l-6-3v-21v-3l15-15a3 3 90 0 1 0 0c3 0 3 0 3 3l-6 12h30m-54 24v-24h9v24z'/></svg>");
}
span{
display:inline-block;
vertical-align:top;
margin-top:-10px;
margin-left:-5px;
}
<ul>
<li><span>Apples</span></li>
<li><span>Bananas</span></li>
<li>Oranges</li>
</ul>
Note: tcpdump requires root privileges, so you'll have to root your phone if not done already. Here's an ARM binary of tcpdump (this works for my Samsung Captivate). If you prefer to build your own binary, instructions are here (yes, you'd likely need to cross compile).
Also, check out Shark For Root (an Android packet capture tool based on tcpdump).
I don't believe tcpdump can monitor traffic by specific process ID. The strace method that Chris Stratton refers to seems like more effort than it's worth. It would be simpler to monitor the specific IPs and ports used by the target process. If that info isn't known, capture all traffic during a period of process activity and then sift through the resulting pcap with Wireshark.
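For example, a hedged sketch of capturing only a known IP and port to a pcap for later inspection in Wireshark (the address, port, and output path are illustrative):
tcpdump -i any -w /sdcard/capture.pcap host 93.184.216.34 and port 443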
I faced the same issue. For me, the fix was just to uncheck the option:
Tablix Properties => Page Break Options => Keep together on one page if possible
in the SSRS report. It was trying to put all records on the same page instead of creating many pages.
double a, b;
Console.WriteLine("Write the number you want, appending .00 at the end");
try
{
    a = Convert.ToDouble(Console.ReadLine());
    b = a * Math.PI;
    Console.WriteLine("Result " + b);
}
catch (Exception)
{
    Console.WriteLine("conversion error");
    throw;
}
Limit: 30 symbols. The username must contain only letters, numbers, periods and underscores.
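A hedged sketch of that rule as a regular expression (the pattern name is illustrative):
const usernamePattern = /^[A-Za-z0-9._]{1,30}$/;
console.log(usernamePattern.test("user.name_1")); // true
console.log(usernamePattern.test("bad-name"));    // false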
Wrapping an <a> around won't work (unless you set the <div> to display:inline-block; or the <a> to display:block;), because the div is a block-level element and the <a> is not.
<a href="http://www.example.com" style="display:block;">
<div>
content
</div>
</a>
<a href="http://www.example.com">
<div style="display:inline-block;">
content
</div>
</a>
<a href="http://www.example.com">
<span>
content
</span >
</a>
<a href="http://www.example.com">
content
</a>
But maybe you should skip the <div>
and choose a <span>
instead, or just the plain <a>
. And if you really want to make the div clickable, you could attach a JavaScript redirect with an onclick handler, something like:
document.getElementById("myId").setAttribute('onclick', 'location.href = "url"');
but I would recommend against that.
You can define array size at runtime.
This will allow you to do whatever to dynamically compute the array's size. But, once defined the size is immutable.
Array a = Array.CreateInstance(typeof(string), 5);
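For illustration, a minimal sketch of writing and then reading an element of such an array (the values here are arbitrary):
Array a = Array.CreateInstance(typeof(string), 5);
a.SetValue("hello", 0);            // write index 0
string s = (string)a.GetValue(0);  // read it back; a cast is required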
In the Project’s Settings, add /FORCE:MULTIPLE
to the Linker’s Command Line options.
From MSDN: "Use /FORCE:MULTIPLE to create an output file whether or not LINK finds more than one definition for a symbol."
Just to keep this up to date:
The current version of SQLDeveloper has an export tool (Tools > Database Export
) that will allow you to dump a schema to a file, with filters for object types, object names, table data etc.
It's a fair amount easier to set up and use than exp
and imp
if you're used to working in a GUI environment, but not as versatile if you need to use it for scripting anything.
Use: window.location.href
.
As noted above, document.URL
doesn't update when updating window.location
. See MDN.
For me this error was caused by something different: I had two AJAX calls on my page, one to save a comment and another to save a like. In my routes.php I had this:
Route::post('posts/show','PostController@save_comment');
Route::post('posts/show','PostController@save_like');
and I got a 500 internal server error for my save-like AJAX call, so I changed the second route's HTTP request type to PUT and the error went away. You can use PATCH too. Maybe it helps.
I agree with thomaux that the initialization type checking error is a TypeScript bug. However, I still wanted to find a way to declare and initialize a Dictionary in a single statement with correct type checking. This implementation is longer, however it adds additional functionality such as a containsKey(key: string)
and remove(key: string)
method. I suspect that this could be simplified once generics are available in the 0.9 release.
First we declare the base Dictionary class and Interface. The interface is required for the indexer because classes cannot implement them.
interface IDictionary {
add(key: string, value: any): void;
remove(key: string): void;
containsKey(key: string): boolean;
keys(): string[];
values(): any[];
}
class Dictionary {
_keys: string[] = [];
_values: any[] = [];
constructor(init: { key: string; value: any; }[]) {
for (var x = 0; x < init.length; x++) {
this[init[x].key] = init[x].value;
this._keys.push(init[x].key);
this._values.push(init[x].value);
}
}
add(key: string, value: any) {
this[key] = value;
this._keys.push(key);
this._values.push(value);
}
remove(key: string) {
var index = this._keys.indexOf(key, 0);
this._keys.splice(index, 1);
this._values.splice(index, 1);
delete this[key];
}
keys(): string[] {
return this._keys;
}
values(): any[] {
return this._values;
}
containsKey(key: string) {
if (typeof this[key] === "undefined") {
return false;
}
return true;
}
toLookup(): IDictionary {
return this;
}
}
Now we declare the Person specific type and Dictionary/Dictionary interface. In the PersonDictionary note how we override values()
and toLookup()
to return the correct types.
interface IPerson {
firstName: string;
lastName: string;
}
interface IPersonDictionary extends IDictionary {
[index: string]: IPerson;
values(): IPerson[];
}
class PersonDictionary extends Dictionary {
constructor(init: { key: string; value: IPerson; }[]) {
super(init);
}
values(): IPerson[]{
return this._values;
}
toLookup(): IPersonDictionary {
return this;
}
}
And here is a simple initialization and usage example:
var persons = new PersonDictionary([
{ key: "p1", value: { firstName: "F1", lastName: "L2" } },
{ key: "p2", value: { firstName: "F2", lastName: "L2" } },
{ key: "p3", value: { firstName: "F3", lastName: "L3" } }
]).toLookup();
alert(persons["p1"].firstName + " " + persons["p1"].lastName);
// alert: F1 L2
persons.remove("p2");
if (!persons.containsKey("p2")) {
alert("Key no longer exists");
// alert: Key no longer exists
}
alert(persons.keys().join(", "));
// alert: p1, p3
There is a Patterns
class in package android.util
which is beneficial here. Below is the method I always use for validating email addresses (and it works for many other patterns too):
private boolean isEmailValid(String email) {
return !TextUtils.isEmpty(email) && Patterns.EMAIL_ADDRESS.matcher(email).matches();
}
You can use at class level with following syntax
@Entity
@Table(uniqueConstraints={@UniqueConstraint(columnNames={"username"})})
public class SomeEntity {
@Column(name = "username")
public String username;
}
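If the constraint should span several columns, the same annotation takes a list of column names (the email column here is just a hypothetical example):
@Table(uniqueConstraints={@UniqueConstraint(columnNames={"username", "email"})})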
Here is my approach to making a structure array from a pandas DataFrame.
Create the data frame
import pandas as pd
import numpy as np
import six
NaN = float('nan')
ID = [1, 2, 3, 4, 5, 6, 7]
A = [NaN, NaN, NaN, 0.1, 0.1, 0.1, 0.1]
B = [0.2, NaN, 0.2, 0.2, 0.2, NaN, NaN]
C = [NaN, 0.5, 0.5, NaN, 0.5, 0.5, NaN]
columns = {'A':A, 'B':B, 'C':C}
df = pd.DataFrame(columns, index=ID)
df.index.name = 'ID'
print(df)
A B C
ID
1 NaN 0.2 NaN
2 NaN NaN 0.5
3 NaN 0.2 0.5
4 0.1 0.2 NaN
5 0.1 0.2 0.5
6 0.1 NaN 0.5
7 0.1 NaN NaN
Define function to make a numpy structure array (not a record array) from a pandas DataFrame.
def df_to_sarray(df):
"""
Convert a pandas DataFrame object to a numpy structured array.
This is functionally equivalent to but more efficient than
np.array(df.to_records())
:param df: the data frame to convert
:return: a numpy structured array representation of df
"""
v = df.values
cols = df.columns
if six.PY2: # python 2 needs .encode() but 3 does not
types = [(cols[i].encode(), df[k].dtype.type) for (i, k) in enumerate(cols)]
else:
types = [(cols[i], df[k].dtype.type) for (i, k) in enumerate(cols)]
dtype = np.dtype(types)
z = np.zeros(v.shape[0], dtype)
for (i, k) in enumerate(z.dtype.names):
z[k] = v[:, i]
return z
Use reset_index
to make a new data frame that includes the index as part of its data. Convert that data frame to a structure array.
sa = df_to_sarray(df.reset_index())
sa
array([(1L, nan, 0.2, nan), (2L, nan, nan, 0.5), (3L, nan, 0.2, 0.5),
(4L, 0.1, 0.2, nan), (5L, 0.1, 0.2, 0.5), (6L, 0.1, nan, 0.5),
(7L, 0.1, nan, nan)],
dtype=[('ID', '<i8'), ('A', '<f8'), ('B', '<f8'), ('C', '<f8')])
EDIT: Updated df_to_sarray to avoid error calling .encode() with python 3. Thanks to Joseph Garvin and halcyon for their comment and solution.
If there are any more questions, please let me know.
Thanks.
Just tell composer to use source if available:
composer update --prefer-source
Or:
composer install --prefer-source
Then you will get packages as cloned repositories instead of extracted tarballs, so you can make some changes and commit them back. Of course, assuming you have write/push permissions to the repository and Composer knows about project's repository.
Disclaimer: I think I may answered a little bit different question, but this was what I was looking for when I found this question, so I hope it will be useful to others as well.
If Composer does not know where the project's repository is, or the project does not have a proper composer.json, the situation is a bit more complicated, but others have answered such scenarios already.
In response to jt. You are correct, the HTML with just the password is susceptible to a man-in-the-middle attack. However, you can seed it with a GUID from the server ...
$.post(
'includes/login.php',
{ user: username, pass: $.md5(password + GUID) },
onLogin,
'json' );
This would defeat the man-in-the-middle attack, in that the server would generate a new GUID for each attempt.
I had the same issue with Firefox. When I searched for a solution I didn't find anything, but then I tried loading the script from a CDN and it worked properly. So if the script isn't one you wrote yourself, try loading it from a CDN link. In my case, a script of my own loaded and imported successfully, so there seems to be something about scripts fetched from the network; just try a CDN, you won't lose anything.
I hope it helps you.
One solution I've favoured is to do this with sprintf directly into the std::string buffer, after making said buffer big enough:
#include <cstdio>   // sprintf
#include <cstring>  // memset
#include <string>
#include <iostream>
using namespace std;
string l_output;
l_output.resize(100);
for (int i = 0; i < 1000; ++i)
{
memset (&l_output[0], 0, 100);
sprintf (&l_output[0], "\r%i\0", i);
cout << l_output;
cout.flush();
}
So, create the std::string, resize it, access its buffer directly...
git pull origin master
will pull changes from the origin
remote, master
branch and merge them to the local checked-out branch.
git pull origin/master
will pull changes from the locally stored branch origin/master
and merge that to the local checked-out branch. The origin/master
branch is essentially a "cached copy" of what was last pulled from origin
, which is why it's called a remote branch in git parlance. This might be somewhat confusing.
You can see what branches are available with git branch
and git branch -r
to see the "remote branches".
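If you only want to refresh those cached remote-tracking branches without merging anything, run:
git fetch origin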
Any of the above.
There are many, many better things to pontificate. Such as what colour bark suits a tree best, I think vague brown with tinges of dulcet moss.
Regular expression for matching everything after "net" and before ".php":
$pattern = "net([a-zA-Z0-9_]*)\.php";
In the above regular expression, you can find the matching group of characters enclosed by "()" to be what you are looking for.
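For example, a minimal usage sketch with preg_match (the URL here is made up):
$subject = "http://example.com/netadmin_page.php";
if (preg_match('/net([a-zA-Z0-9_]*)\.php/', $subject, $matches)) {
    echo $matches[1]; // prints "admin_page"
}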
Hope it's useful.
I got this error generating a data frame consisting of timestamps and data:
df = pd.DataFrame({'data':value}, index=pd.DatetimeIndex(timestamp))
Adding the suggested solution works for me:
df = pd.DataFrame({'data':value}, index=pd.DatetimeIndex(timestamp), dtype=float)
Thanks Chang She!
Example:
data
2005-01-01 00:10:00 7.53
2005-01-01 00:20:00 7.54
2005-01-01 00:30:00 7.62
2005-01-01 00:40:00 7.68
2005-01-01 00:50:00 7.81
2005-01-01 01:00:00 7.95
2005-01-01 01:10:00 7.96
2005-01-01 01:20:00 7.95
2005-01-01 01:30:00 7.98
2005-01-01 01:40:00 8.06
2005-01-01 01:50:00 8.04
2005-01-01 02:00:00 8.06
2005-01-01 02:10:00 8.12
2005-01-01 02:20:00 8.12
2005-01-01 02:30:00 8.25
2005-01-01 02:40:00 8.27
2005-01-01 02:50:00 8.17
2005-01-01 03:00:00 8.21
2005-01-01 03:10:00 8.29
2005-01-01 03:20:00 8.31
2005-01-01 03:30:00 8.25
2005-01-01 03:40:00 8.19
2005-01-01 03:50:00 8.17
2005-01-01 04:00:00 8.18
data
2005-01-01 00:00:00 7.636000
2005-01-01 01:00:00 7.990000
2005-01-01 02:00:00 8.165000
2005-01-01 03:00:00 8.236667
2005-01-01 04:00:00 8.180000
Your command is completely incorrect. The output format is not rawvideo
and you don't need the bitstream filter h264_mp4toannexb
which is used when you want to convert the h264
contained in an mp4
to the Annex B
format used by MPEG-TS
for example. What you want to use instead is the aac_adtstoasc
for the AAC
streams.
ffmpeg -i http://.../playlist.m3u8 -c copy -bsf:a aac_adtstoasc output.mp4
I know the answer by @Pascal Thivent has solved the issue. I would like to add a bit more to his answer to others who might be surfing this thread.
If you are like me in the initial days of learning and wrapping your head around the concept of using the @OneToMany
annotation with the 'mappedBy
' property, it also means that the other side holding the @ManyToOne
annotation with the @JoinColumn
is the 'owner' of this bi-directional relationship.
Also, mappedBy
takes the field name (mCustomer
in this example) as its input, not the class type (e.g. Customer) or the entity name (e.g. customer).
BONUS :
Also, look into the orphanRemoval
property of @OneToMany
annotation. If it is set to true, then if a parent is deleted in a bi-directional relationship, Hibernate automatically deletes its children.
ASCII has 128 code points, 0 through 127, so it fits in a single 8-bit byte. The values 128 through 255 tended to be used for other characters, and the incompatible choices caused the code page disaster: text encoded in one code page cannot be read correctly by a program that assumes or guesses another code page.
Unicode came about to solve this disaster. Version 1 started out with 65,536 code points, commonly encoded in 16 bits. It was later extended in version 2 to 1.1 million code points. The current version is 6.3, using 110,187 of the available 1.1 million code points. That doesn't fit in 16 bits anymore.
Encoding in 16 bits was common when v2 came around, used by Microsoft and Apple operating systems for example, and by language runtimes like Java. The v2 spec came up with a way to map those 1.1 million code points into 16 bits: an encoding called UTF-16, a variable-length encoding where one code point can take either 2 or 4 bytes. The original v1 code points take 2 bytes, the added ones take 4.
Another very common variable-length encoding, used in *nix operating systems and tools, is UTF-8; a code point can take between 1 and 4 bytes, with the original ASCII codes taking 1 byte and the rest taking more. The only non-variable-length encoding is UTF-32, which takes 4 bytes per code point. It is not often used since it is pretty wasteful. There are other ones, like UTF-1 and UTF-7, that are widely ignored.
An issue with the UTF-16/32 encodings is that the order of the bytes will depend on the endian-ness of the machine that created the text stream. So add to the mix UTF-16BE, UTF-16LE, UTF-32BE and UTF-32LE.
Having these different encoding choices brings back the code page disaster to some degree, along with heated debates among programmers about which UTF choice is "best". Their association with operating-system defaults pretty much draws the lines. One counter-measure is the definition of a BOM, the Byte Order Mark, a special code point (U+FEFF, zero-width space) at the beginning of a text stream that indicates how the rest of the stream is encoded. It indicates both the UTF encoding and the endianness and is neutral to a text-rendering engine. Unfortunately it is optional, and many programmers claim their right to omit it, so accidents are still pretty common.
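As a quick illustration (a Python sketch; the byte values shown are the standard ones), here is how the same text looks under a few of these encodings:
text = "hi"
print(text.encode("utf-8"))      # b'hi'           - 1 byte per ASCII character
print(text.encode("utf-16-le"))  # b'h\x00i\x00'   - 2 bytes per code point
print("\ufeff".encode("utf-8"))  # b'\xef\xbb\xbf' - the BOM as encoded in UTF-8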
Do you know the crash error from Windows explorer?! "Pure virtual function call ..."
Same problem ...
class AbstractClass
{
public:
AbstractClass( ){
    // if you call pureVirtualFunction() here I will crash...
}
virtual void pureVirtualFunction() = 0;
};
Because there is no implementation of pureVirtualFunction() and the function is called in the constructor, the program will crash.
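A minimal sketch of how this plays out (the derived class here is hypothetical): the base constructor runs before the derived part of the object exists, so the virtual dispatch has no implementation to land on:
class Concrete : public AbstractClass
{
public:
    void pureVirtualFunction() override {}
};

int main()
{
    Concrete c; // fine as written; but if AbstractClass's constructor called
                // pureVirtualFunction(), this would terminate with
                // "pure virtual function call"
}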
If you are using the Eclipse ADT plugin for your development, open AndroidManifest.xml
in the Android Manifest Editor (should be the default action for opening AndroidManifest.xml
from the project files list).
Afterwards, select the Permissions
tab along the bottom of the editor (Manifest - Application - Permissions - Instrumentation - AndroidManifest.xml
), then click Add...,
select Uses Permission,
and choose the desired permission from the dropdown on the right, or just copy-paste in the necessary one (such as the android.permission.INTERNET
permission you required).
http://docs.python.org/howto/unicode.html#the-unicode-type
str = unicode(str, errors='replace')
or
str = unicode(str, errors='ignore')
Note: This will strip out (ignore) the characters in question returning the string without them.
For me this is ideal case since I'm using it as protection against non-ASCII input which is not allowed by my application.
Alternatively: Use the open method from the codecs
module to read in the file:
import codecs
with codecs.open(file_name, 'r', encoding='utf-8',
                 errors='ignore') as fdata:
    data = fdata.read()  # undecodable bytes are silently dropped
When I tried to install a new Ionic app, I got the same error as below. I tried many sources and found the mistake: my user and system environment variables unnecessarily included proxy values. I removed the user variable PROXY (http://host:port) and the system variable http_proxy (http://username:password@host:port), and now it is working fine without trouble.
[ERROR] Network connectivity error occurred, are you offline?
If you are behind a firewall and need to configure proxy settings, see: https://ion.link/cli-proxy-docs
Error: getaddrinfo ENOTFOUND host host:80
Here's a nifty little extension method that you can use if you want to check guid uniqueness in many places in your code.
internal static class GuidExt
{
public static bool IsUnique(this Guid guid)
{
while (guid != Guid.NewGuid())
{ }
return false;
}
}
To call it, simply call Guid.IsUnique whenever you generate a new guid...
Guid g = Guid.NewGuid();
if (!g.IsUnique())
{
throw new GuidIsNotUniqueException();
}
...heck, I'd even recommend calling it twice to make sure it got it right in the first round.
By combining existing answers (this one and this one) the proper type safe way to add an ArrayList
to a JComboBox
is the following:
private DefaultComboBoxModel<YourClass> getComboBoxModel(List<YourClass> yourClassList)
{
YourClass[] comboBoxModel = yourClassList.toArray(new YourClass[0]);
return new DefaultComboBoxModel<>(comboBoxModel);
}
In your GUI
code you set the entire list into your JComboBox
as follows:
DefaultComboBoxModel<YourClass> comboBoxModel = getComboBoxModel(yourClassList);
comboBox.setModel(comboBoxModel);
You have to inject the value of current
into an Attribute Equals selector:
$("ul").find(`[data-slide='${current}']`)
For older JavaScript environments (ES5 and earlier):
$("ul").find("[data-slide='" + current + "']");
In my case, the issue was unrelated to CORS since I was issuing a jQuery POST to the same web server. The data was JSON but I had omitted the dataType: 'json' parameter.
I did not have (nor did I add) a contentType parameter as shown in David Lopes' answer above.
There's a duplicate question with a simpler answer using cut:
svn status | grep '\!' | cut -d\ -f2-
-d
specifies the delimiter (space), -f
specifies the list of columns (all starting with the 2nd)
You can use JsonNode
for a structured tree representation of your JSON string. It's part of the rock-solid Jackson library, which is omnipresent.
ObjectMapper mapper = new ObjectMapper();
JsonNode yourObj = mapper.readTree("{\"k\":\"v\"}");
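Once parsed, individual values can be read straight off the tree, e.g.:
String v = yourObj.get("k").asText(); // "v"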
A simple one:
import string
import random

# letters, digits and punctuation (Python 3 names; on Python 2 these were string.lowercase etc.)
character = string.ascii_lowercase + string.ascii_uppercase + string.digits + string.punctuation
char_len = len(character)
# you can specify your password length here
pass_len = random.randint(10, 20)
password = ''
for x in range(pass_len):
    password = password + character[random.randint(0, char_len - 1)]
print(password)
The guidance on not using a thread abort is controversial. I think there is still a place for it, but only in exceptional circumstances. However, you should always attempt to design around it and see it as a last resort.
Example;
You have a simple windows form application that connects to a blocking synchronous web service. Within which it executes a function on the web service within a Parallel loop.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
Thread.Sleep(120000); // pretend web service call
});
Say in this example, the blocking call takes 2 mins to complete. Now I set my MaxDegreeOfParallelism to say ProcessorCount. iListOfItems has 1000 items within it to process.
The user clicks the process button and the loop commences, we have 'up-to' 20 threads executing against 1000 items in the iListOfItems collection. Each iteration executes on its own thread. Each thread will utilise a foreground thread when created by Parallel.ForEach. This means regardless of the main application shutdown, the app domain will be kept alive until all threads have finished.
However the user needs to close the application for some reason, say they close the form. These 20 threads will continue to execute until all 1000 items are processed. This is not ideal in this scenario, as the application will not exit as the user expects and will continue to run behind the scenes, as can be seen by taking a look in task manager.
Say the user tries to rebuild the app again (VS 2010), it reports the exe is locked, then they would have to go into task manager to kill it or just wait until all 1000 items are processed.
I would not blame you for saying, but of course! I should be cancelling these threads using the CancellationTokenSource object and calling Cancel ... but there are some problems with this as of .net 4.0. Firstly this is still never going to result in a thread abort which would offer up an abort exception followed by thread termination, so the app domain will instead need to wait for the threads to finish normally, and this means waiting for the last blocking call, which would be the very last running iteration (thread) that ultimately gets to call po.CancellationToken.ThrowIfCancellationRequested
.
In the example this would mean the app domain could still stay alive for up to 2 mins, even though the form has been closed and cancel called.
Note that Calling Cancel on CancellationTokenSource does not throw an exception on the processing thread(s), which would indeed act to interrupt the blocking call similar to a thread abort and stop the execution. An exception is cached ready for when all the other threads (concurrent iterations) eventually finish and return, the exception is thrown in the initiating thread (where the loop is declared).
I chose not to use the Cancel option on a CancellationTokenSource object. This is wasteful and arguably violates the well-known anti-pattern of controlling the flow of your code with exceptions.
Instead, it is arguably 'better' to implement a simple thread safe property i.e. Bool stopExecuting. Then within the loop, check the value of stopExecuting and if the value is set to true by the external influence, we can take an alternate path to close down gracefully. Since we should not call cancel, this precludes checking CancellationTokenSource.IsCancellationRequested which would otherwise be another option.
Something like the following if condition would be appropriate within the loop;
if (loopState.ShouldExitCurrentIteration || loopState.IsExceptional || stopExecuting) {loopState.Stop(); return;}
The iteration will now exit in a 'controlled' manner as well as terminating further iterations, but as I said, this does little for our issue of having to wait on the long running and blocking call(s) that are made within each iteration (parallel loop thread), since these have to complete before each thread can get to the option of checking if it should stop.
In summary, as the user closes the form, the 20 threads will be signaled to stop via stopExecuting, but they will only stop when they have finished executing their long running function call.
We can't do anything about the fact that the application domain will always stay alive and only be released when all foreground threads have completed. And this means there will be a delay associated with waiting for any blocking calls made within the loop to complete.
Only a true thread abort can interrupt the blocking call, and you must mitigate leaving the system in an unstable/undefined state as best you can in the aborted thread's exception handler, which goes without question. Whether that's appropriate is a matter for the programmer to decide, based on what resource handles they chose to maintain and how easy it is to close them in a thread's finally block. You could register with a token to terminate on cancel as a semi-workaround, i.e.
CancellationTokenSource cts = new CancellationTokenSource();
ParallelOptions po = new ParallelOptions();
po.CancellationToken = cts.Token;
po.MaxDegreeOfParallelism = System.Environment.ProcessorCount;
Parallel.ForEach(iListOfItems, po, (item, loopState) =>
{
using (cts.Token.Register(Thread.CurrentThread.Abort))
{
    try
    {
        Thread.Sleep(120000); // pretend web service call
    }
    catch (ThreadAbortException ex)
    {
        // log etc.
    }
    finally
    {
        // clean up here
    }
}
});
but this will still result in an exception in the declaring thread.
All things considered, interrupt blocking calls using the parallel.loop constructs could have been a method on the options, avoiding the use of more obscure parts of the library. But why there is no option to cancel and avoid throwing an exception in the declaring method strikes me as a possible oversight.
Assign the result to a date variable, because the AddDays
method returns a new DateTime rather than modifying the original:
DateTime someDate = endDate.AddDays(2);
The OAuth process flow is complex and there is always room for one error or another. My suggestion is to always use boilerplate code and an established library for the OAuth authentication flow; it will make your life easier.
Here is the link for the set of libraries: OAuth Libraries for .NET.
According to the documentation:
Thrown to indicate that the code has attempted to cast an Object
to a subclass
of which it is not an instance. For example, the following code generates a ClassCastException
:
Object x = new Integer(0);
System.out.println((String)x);
I had the same issue except removing and adding the reference back did not fix the error, so I changed .Net version from 4.5
to 4.5.1
.
To achieve this go to your web.config
file and change the following lines
<compilation debug="true" targetFramework="4.5" />
<httpRuntime targetFramework="4.5" />
to this
<compilation debug="true" targetFramework="4.5.1" />
<httpRuntime targetFramework="4.5.1" />
and rebuild.
You don't need an INDEX/MATCH formula. You can use this array formula; you have to press CTRL+SHIFT+ENTER after you enter it.
=MAX(IF((A1:A6=A10)*(B1:B6=B10),C1:F6))
Per the documentation you posted, and this blog post - there's no difference when declaring the primitive between an unsigned int/long and a signed one. The "new support" is the addition of the static methods in the Integer and Long classes, e.g. Integer.divideUnsigned. If you're not using those methods, your "unsigned" long above 2^63-1 is just a plain old long with a negative value.
From a quick skim, it doesn't look like there's a way to declare integer constants in the range outside of +/- 2^31-1, or +/- 2^63-1 for longs. You would have to manually compute the negative value corresponding to your out-of-range positive value.
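For example (a small sketch; the value below is 2^64-1, whose bit pattern is -1 as a signed long):
long big = Long.parseUnsignedLong("18446744073709551615"); // bit pattern of -1L
System.out.println(big);                          // -1 (signed interpretation)
System.out.println(Long.toUnsignedString(big));   // 18446744073709551615
System.out.println(Long.divideUnsigned(big, 2));  // 9223372036854775807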
You could use ng-init in an outer div:
<div ng-init="param='value';">
<div ng-controller="BasketController" >
<label>param: {{value}}</label>
</div>
</div>
The parameter will then be available in your controller's scope:
function BasketController($scope) {
console.log($scope.param);
}
The easiest thing to do is to run your program directly instead of through the IDE. Open a command prompt (Start->Run->Cmd.exe
->Enter), cd
to the folder where your project is, and run the program from there. That way, when the program exits, the prompt window sticks around and you can read all of the output.
Alternatively, you can also re-direct standard output to a file, but that's probably not what you are going for here.
What I do is use Object.keys() to return a list of all the keys, and then get the length of that:
Object.keys(dictionary).length
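For example:
var dictionary = { a: 1, b: 2, c: 3 };
console.log(Object.keys(dictionary).length); // 3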
This did the job for me, including active main dropdowns and the active children (thanks to 422):
$(document).ready(function () {
var url = window.location;
// Will only work if string in href matches with location
$('ul.nav a[href="' + url + '"]').parent().addClass('active');
// Will also work for relative and absolute hrefs
$('ul.nav a').filter(function () {
return this.href == url;
}).parent().addClass('active').parent().parent().addClass('active');
});
The 3 additive colors are in fact red, green, and blue; printers use CMYK (cyan, magenta, yellow, and black).
And as http://en.wikipedia.org/wiki/Additive_color explains: if you use RYB as your primary colors, how do you make green, since yellow is made from equal amounts of red and green?
I would do this slightly different by applying both the UNPIVOT
and the PIVOT
functions to get the final result. The unpivot takes the values from both the totalcount
and totalamount
columns and places them into one column with multiple rows. You can then pivot on those results.:
select chardate,
Australia_totalcount as [Australia # of Transactions],
Australia_totalamount as [Australia Total $ Amount],
Austria_totalcount as [Austria # of Transactions],
Austria_totalamount as [Austria Total $ Amount]
from
(
select
numericmonth,
chardate,
country +'_'+col col,
value
from
(
select numericmonth,
country,
chardate,
cast(totalcount as numeric(10, 2)) totalcount,
cast(totalamount as numeric(10, 2)) totalamount
from mytransactions
) src
unpivot
(
value
for col in (totalcount, totalamount)
) unpiv
) s
pivot
(
sum(value)
for col in (Australia_totalcount, Australia_totalamount,
Austria_totalcount, Austria_totalamount)
) piv
order by numericmonth
See SQL Fiddle with Demo.
If you have an unknown number of country
names, then you can use dynamic SQL:
DECLARE @cols AS NVARCHAR(MAX),
@colsName AS NVARCHAR(MAX),
@query AS NVARCHAR(MAX)
select @cols = STUFF((SELECT distinct ',' + QUOTENAME(country +'_'+c.col)
from mytransactions
cross apply
(
select 'TotalCount' col
union all
select 'TotalAmount'
) c
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
select @colsName
= STUFF((SELECT distinct ', ' + QUOTENAME(country +'_'+c.col)
+' as ['
+ country + case when c.col = 'TotalCount' then ' # of Transactions]' else 'Total $ Amount]' end
from mytransactions
cross apply
(
select 'TotalCount' col
union all
select 'TotalAmount'
) c
FOR XML PATH(''), TYPE
).value('.', 'NVARCHAR(MAX)')
,1,1,'')
set @query
= 'SELECT chardate, ' + @colsName + '
from
(
select
numericmonth,
chardate,
country +''_''+col col,
value
from
(
select numericmonth,
country,
chardate,
cast(totalcount as numeric(10, 2)) totalcount,
cast(totalamount as numeric(10, 2)) totalamount
from mytransactions
) src
unpivot
(
value
for col in (totalcount, totalamount)
) unpiv
) s
pivot
(
sum(value)
for col in (' + @cols + ')
) p
order by numericmonth'
execute(@query)
Both give the result:
| CHARDATE | AUSTRALIA # OF TRANSACTIONS | AUSTRALIA TOTAL $ AMOUNT | AUSTRIA # OF TRANSACTIONS | AUSTRIA TOTAL $ AMOUNT |
--------------------------------------------------------------------------------------------------------------------------------------
| Jul-12 | 36 | 699.96 | 11 | 257.82 |
| Aug-12 | 44 | 1368.71 | 5 | 126.55 |
| Sep-12 | 52 | 1161.33 | 7 | 92.11 |
| Oct-12 | 50 | 1099.84 | 12 | 103.56 |
| Nov-12 | 38 | 1078.94 | 21 | 377.68 |
| Dec-12 | 63 | 1668.23 | 3 | 14.35 |
If you are using a string resource xml file (supports HTML tags), it can be done using<b> </b>
, <i> </i>
and <u> </u>
.
<resources>
<string name="your_string_here">
This is an <u>underline</u>.
</string>
</resources>
If you want to underline something from code use:
TextView tv = (TextView) view.findViewById(R.id.tv);
SpannableString content = new SpannableString("Content");
content.setSpan(new UnderlineSpan(), 0, content.length(), 0);
tv.setText(content);
Hope this helps
Q: Is this ok?
A: yes
Q: Is this expected?
A: Yes, this is expected (if you are using react-redux).
Q: Is this an anti-pattern?
A: No, this is not an anti-pattern.
It's called "connecting" your component or "making it smart". It's by design.
It adds another layer of decoupling between your component and your state, which increases the modularity of your code. It also lets you simplify your component's props to just the subset of the application state it needs, which, in fact, helps you comply with the Redux pattern.
Think about it this way: a store is supposed to contain the entire state of your application.
For large applications, this could contain dozens of properties nested many layers deep.
You don't want to haul all that around on each call (expensive).
Without mapStateToProps
or some analog thereof, you would be tempted to carve up your state another way to improve performance/simplify.
Only this really reloads the page (today):
<input type="button" value="Refresh Page" onClick="location.href=location.href">
The others do not exactly reload; they keep the values inside text boxes.
If you are going to have a Button, you can update the data with a setState inside the onPress. SetState will then re-render your FlatList.
If this is what you want, simply add the button inside the fragment:
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:tools="http://schemas.android.com/tools"
android:id="@+id/map"
android:name="com.google.android.gms.maps.SupportMapFragment"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context="com.example.LocationChooser">
<Button
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_gravity="right|top"
android:text="Demo Button"
android:padding="10dp"
android:layout_marginTop="20dp"
android:paddingRight="10dp"/>
</fragment>
sudo apt-get install phpx.x-gd
sudo service apache2 restart
where x.x is the PHP version.
What I did:
return to clients:
byte[] result = ****encrypted data****;
String str = Base64.encodeBase64String(result);
return str;
receive from clients:
byte[] bytes = Base64.decodeBase64(str);
your data will be transferred in this format:
OpfyN9paAouZ2Pw+gDgGsDWzjIphmaZbUyFx5oRIN1kkQ1tDbgoi84dRfklf1OZVdpAV7TonlTDHBOr93EXIEBoY1vuQnKXaG+CJyIfrCWbEENJ0gOVBr9W3OlFcGsZW5Cf9uirSmx/JLLxTrejZzbgq3lpToYc3vkyPy5Y/oFWYljy/3OcC/S458uZFOc/FfDqWGtT9pTUdxLDOwQ6EMe0oJBlMXm8J2tGnRja4F/aVHfQddha2nUMi6zlvAm8i9KnsWmQG//ok25EHDbrFBP2Ia/6Bx/SGS4skk/0couKwcPVXtTq8qpNh/aYK1mclg7TBKHfF+DHppwd30VULpA==
To reference images in public there are two straightforward ways I know of. One is like the above from Homam Bahrani,
using
<img src={process.env.PUBLIC_URL + '/yourPathHere.jpg'} />
And since this works you really don't need anything else, but this also works:
<img src={window.location.origin + '/yourPathHere.jpg'} />
function check(list) {
    var newList = [];
    for (var i = 0; i < list.length; i++) {
        var thisValue = list[i];
        if (i > 0 && newList.indexOf(thisValue) > -1) {
            console.log("getting repeated");
            return true;
        }
        newList.push(thisValue);
    }
    return false;
}
var list1 = ["dse","dfg","dse"];
check(list1);
Output:
getting repeated
true
Why is this happening?
The entire ext/mysql
PHP extension, which provides all functions named with the prefix mysql_
, was officially deprecated in PHP v5.5.0 and removed in PHP v7.
It was originally introduced in PHP v2.0 (November 1997) for MySQL v3.20, and no new features have been added since 2006. Coupled with the lack of new features are difficulties in maintaining such old code amidst complex security vulnerabilities.
The manual has contained warnings against its use in new code since June 2011.
How can I fix it?
As the error message suggests, there are two other MySQL extensions that you can consider: MySQLi and PDO_MySQL, either of which can be used instead of ext/mysql
. Both have been in PHP core since v5.0, so if you're using a version that is throwing these deprecation errors then you can almost certainly just start using them right away—i.e. without any installation effort.
They differ slightly, but offer a number of advantages over the old extension including API support for transactions, stored procedures and prepared statements (thereby providing the best way to defeat SQL injection attacks). PHP developer Ulf Wendel has written a thorough comparison of the features.
Hashphp.org has an excellent tutorial on migrating from ext/mysql
to PDO.
I understand that it's possible to suppress deprecation errors by setting
error_reporting
inphp.ini
to excludeE_DEPRECATED
:error_reporting = E_ALL ^ E_DEPRECATED
What will happen if I do that?
Yes, it is possible to suppress such error messages and continue using the old ext/mysql
extension for the time being. But you really shouldn't do this—this is a final warning from the developers that the extension may not be bundled with future versions of PHP (indeed, as already mentioned, it has been removed from PHP v7). Instead, you should take this opportunity to migrate your application now, before it's too late.
Note also that this technique will suppress all E_DEPRECATED
messages, not just those to do with the ext/mysql
extension: therefore you may be unaware of other upcoming changes to PHP that would affect your application code. It is, of course, possible to only suppress errors that arise on the expression at issue by using PHP's error control operator—i.e. prepending the relevant line with @
—however this will suppress all errors raised by that expression, not just E_DEPRECATED
ones.
You are starting a new project.
There is absolutely no reason to use ext/mysql
—choose one of the other, more modern, extensions instead and reap the rewards of the benefits they offer.
You have (your own) legacy codebase that currently depends upon ext/mysql
.
It would be wise to perform regression testing: you really shouldn't be changing anything (especially upgrading PHP) until you have identified all of the potential areas of impact, planned around each of them and then thoroughly tested your solution in a staging environment.
Following good coding practice, your application was developed in a loosely integrated/modular fashion and the database access methods are all self-contained in one place that can easily be swapped out for one of the new extensions.
Spend half an hour rewriting this module to use one of the other, more modern, extensions; test thoroughly. You can later introduce further refinements to reap the rewards of the benefits they offer.
The database access methods are scattered all over the place and cannot easily be swapped out for one of the new extensions.
Consider whether you really need to upgrade to PHP v5.5 at this time.
You should begin planning to replace ext/mysql
with one of the other, more modern, extensions in order that you can reap the rewards of the benefits they offer; you might also use it as an opportunity to refactor your database access methods into a more modular structure.
However, if you have an urgent need to upgrade PHP right away, you might consider suppressing deprecation errors for the time being: but first be sure to identify any other deprecation errors that are also being thrown.
You are using a third party project that depends upon ext/mysql
.
Consider whether you really need to upgrade to PHP v5.5 at this time.
Check whether the developer has released any fixes, workarounds or guidance in relation to this specific issue; or, if not, pressure them to do so by bringing this matter to their attention. If you have an urgent need to upgrade PHP right away, you might consider suppressing deprecation errors for the time being: but first be sure to identify any other deprecation errors that are also being thrown.
It is absolutely essential to perform regression testing.
NP-complete problems are those problems that are both NP-Hard and in the complexity class NP. Therefore, to show that any given problem is NP-complete, you need to show that the problem is both in NP and that it is NP-hard.
Problems that are in the NP complexity class can be solved non-deterministically in polynomial time and a possible solution (i.e., a certificate) for a problem in NP can be verified for correctness in polynomial time.
An example of a non-deterministic solution to the k-clique problem would be something like:
1) randomly select k nodes from a graph
2) verify that these k nodes form a clique.
The above strategy is polynomial in the size of the input graph and therefore the k-clique problem is in NP.
Note that all problems deterministically solvable in polynomial time are also in NP.
Showing that a problem is NP-hard typically involves a reduction from some other NP-hard problem to your problem using a polynomial time mapping: http://en.wikipedia.org/wiki/Reduction_(complexity)
MySQL implicitly closed the database connection because the connection had been inactive for too long (34,247,052 milliseconds ≈ 9.5 hours).
If your program then fetches a bad connection from the connection pool, that causes the MySQLNonTransientConnectionException: No operations allowed after connection closed.
MySQL suggests:
You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property
autoReconnect=true
to avoid this problem.
We need to cover at least these aspects to provide a comprehensive answer/comparison (in no particular order of importance): Speed
, Memory usage
, Syntax
and Features
.
My intent is to cover each one of these as clearly as possible from a data.table perspective.
Note: unless explicitly mentioned otherwise, by referring to dplyr, we refer to dplyr's data.frame interface whose internals are in C++ using Rcpp.
The data.table syntax is consistent in its form - DT[i, j, by]
. To keep i
, j
and by
together is by design. By keeping related operations together, it allows operations to be easily optimised for speed and, more importantly, memory usage, and also provides some powerful features, all while maintaining consistency in syntax.
Quite a few benchmarks (though mostly on grouping operations) have been added to the question already showing data.table gets faster than dplyr as the number of groups and/or rows to group by increase, including benchmarks by Matt on grouping from 10 million to 2 billion rows (100GB in RAM) on 100 - 10 million groups and varying grouping columns, which also compares pandas
. See also updated benchmarks, which include Spark
and pydatatable
as well.
On benchmarks, it would be great to cover these remaining aspects as well:
Grouping operations involving a subset of rows - i.e., DT[x > val, sum(y), by = z]
type operations.
Benchmark other operations such as update and joins.
Also benchmark memory footprint for each operation in addition to runtime.
Operations involving filter()
or slice()
in dplyr can be memory inefficient (on both data.frames and data.tables). See this post.
Note that Hadley's comment talks about speed (that dplyr is plenty fast for him), whereas the major concern here is memory.
data.table interface at the moment allows one to modify/update columns by reference (note that we don't need to re-assign the result back to a variable).
# sub-assign by reference, updates 'y' in-place
DT[x >= 1L, y := NA]
But dplyr will never update by reference. The dplyr equivalent would be (note that the result needs to be re-assigned):
# copies the entire 'y' column
ans <- DF %>% mutate(y = replace(y, which(x >= 1L), NA))
A concern for this is referential transparency. Updating a data.table object by reference, especially within a function may not be always desirable. But this is an incredibly useful feature: see this and this posts for interesting cases. And we want to keep it.
Therefore we are working towards exporting shallow()
function in data.table that will provide the user with both possibilities. For example, if it is desirable to not modify the input data.table within a function, one can then do:
foo <- function(DT) {
DT = shallow(DT) ## shallow copy DT
DT[, newcol := 1L] ## does not affect the original DT
DT[x > 2L, newcol := 2L] ## no need to copy (internally), as this column exists only in shallow copied DT
DT[x > 2L, x := 3L] ## have to copy (like base R / dplyr does always); otherwise original DT will
## also get modified.
}
By not using shallow()
, the old functionality is retained:
bar <- function(DT) {
DT[, newcol := 1L] ## old behaviour, original DT gets updated by reference
DT[x > 2L, x := 3L] ## old behaviour, update column x in original DT.
}
By creating a shallow copy using shallow()
, we understand that you don't want to modify the original object. We take care of everything internally to ensure that while also ensuring to copy columns you modify only when it is absolutely necessary. When implemented, this should settle the referential transparency issue altogether while providing the user with both possibilties.
Also, once
shallow()
is exported, dplyr's data.table interface should avoid almost all copies. So those who prefer dplyr's syntax can use it with data.tables. But it will still lack many features that data.table provides, including (sub)-assignment by reference.
Aggregate while joining:
Suppose you have two data.tables as follows:
DT1 = data.table(x=c(1,1,1,1,2,2,2,2), y=c("a", "a", "b", "b"), z=1:8, key=c("x", "y"))
# x y z
# 1: 1 a 1
# 2: 1 a 2
# 3: 1 b 3
# 4: 1 b 4
# 5: 2 a 5
# 6: 2 a 6
# 7: 2 b 7
# 8: 2 b 8
DT2 = data.table(x=1:2, y=c("a", "b"), mul=4:3, key=c("x", "y"))
# x y mul
# 1: 1 a 4
# 2: 2 b 3
And you would like to get sum(z) * mul
for each row in DT2
while joining by columns x,y
. We can either:
1) aggregate DT1
to get sum(z)
, 2) perform a join and 3) multiply (or)
# data.table way
DT1[, .(z = sum(z)), keyby = .(x,y)][DT2][, z := z*mul][]
# dplyr equivalent
DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>%
right_join(DF2) %>% mutate(z = z * mul)
2) do it all in one go (using by = .EACHI
feature):
DT1[DT2, list(z=sum(z) * mul), by = .EACHI]
What is the advantage?
We don't have to allocate memory for the intermediate result.
We don't have to group/hash twice (one for aggregation and other for joining).
And more importantly, the operation what we wanted to perform is clear by looking at j
in (2).
Check this post for a detailed explanation of by = .EACHI
. No intermediate results are materialised, and the join+aggregate is performed all in one go.
Have a look at this, this and this posts for real usage scenarios.
In dplyr
you would have to join and aggregate or aggregate first and then join, neither of which are as efficient, in terms of memory (which in turn translates to speed).
Update and joins:
Consider the data.table code shown below:
DT1[DT2, col := i.mul]
adds/updates DT1
's column col
with mul
from DT2
on those rows where DT2
's key column matches DT1
. I don't think there is an exact equivalent of this operation in dplyr
, i.e., without avoiding a *_join
operation, which would have to copy the entire DT1
just to add a new column to it, which is unnecessary.
Check this post for a real usage scenario.
To summarise, it is important to realise that every bit of optimisation matters. As Grace Hopper would say, Mind your nanoseconds!
Let's now look at syntax. Hadley commented here:
Data tables are extremely fast but I think their concision makes it harder to learn and code that uses it is harder to read after you have written it ...
I find this remark pointless because it is very subjective. What we can perhaps try is to contrast consistency in syntax. We will compare data.table and dplyr syntax side-by-side.
We will work with the dummy data shown below:
DT = data.table(x=1:10, y=11:20, z=rep(1:2, each=5))
DF = as.data.frame(DT)
Basic aggregation/update operations.
# case (a)
DT[, sum(y), by = z] ## data.table syntax
DF %>% group_by(z) %>% summarise(sum(y)) ## dplyr syntax
DT[, y := cumsum(y), by = z]
ans <- DF %>% group_by(z) %>% mutate(y = cumsum(y))
# case (b)
DT[x > 2, sum(y), by = z]
DF %>% filter(x>2) %>% group_by(z) %>% summarise(sum(y))
DT[x > 2, y := cumsum(y), by = z]
ans <- DF %>% group_by(z) %>% mutate(y = replace(y, which(x > 2), cumsum(y)))
# case (c)
DT[, if(any(x > 5L)) y[1L]-y[2L] else y[2L], by = z]
DF %>% group_by(z) %>% summarise(if (any(x > 5L)) y[1L] - y[2L] else y[2L])
DT[, if(any(x > 5L)) y[1L] - y[2L], by = z]
DF %>% group_by(z) %>% filter(any(x > 5L)) %>% summarise(y[1L] - y[2L])
data.table syntax is compact and dplyr's quite verbose. Things are more or less equivalent in case (a).
In case (b), we had to use filter()
in dplyr while summarising. But while updating, we had to move the logic inside mutate()
. In data.table however, we express both operations with the same logic - operate on rows where x > 2
, but in first case, get sum(y)
, whereas in the second case update those rows for y
with its cumulative sum.
This is what we mean when we say the DT[i, j, by]
form is consistent.
Similarly in case (c), when we have if-else
condition, we are able to express the logic "as-is" in both data.table and dplyr. However, if we would like to return just those rows where the if
condition satisfies and skip otherwise, we cannot use summarise()
directly (AFAICT). We have to filter()
first and then summarise because summarise()
always expects a single value.
While it returns the same result, using filter()
here makes the actual operation less obvious.
It might very well be possible to use filter()
in the first case as well (does not seem obvious to me), but my point is that we should not have to.
Aggregation / update on multiple columns
# case (a)
DT[, lapply(.SD, sum), by = z] ## data.table syntax
DF %>% group_by(z) %>% summarise_each(funs(sum)) ## dplyr syntax
DT[, (cols) := lapply(.SD, sum), by = z]
ans <- DF %>% group_by(z) %>% mutate_each(funs(sum))
# case (b)
DT[, c(lapply(.SD, sum), lapply(.SD, mean)), by = z]
DF %>% group_by(z) %>% summarise_each(funs(sum, mean))
# case (c)
DT[, c(.N, lapply(.SD, sum)), by = z]
DF %>% group_by(z) %>% summarise_each(funs(n(), sum))
In case (a), the codes are more or less equivalent. data.table uses familiar base function lapply()
, whereas dplyr
introduces *_each()
along with a bunch of functions to funs()
.
data.table's :=
requires column names to be provided, whereas dplyr generates it automatically.
In case (b), dplyr's syntax is relatively straightforward. Improving aggregations/updates on multiple functions is on data.table's list.
In case (c) though, dplyr would return n()
as many times as many columns, instead of just once. In data.table, all we need to do is to return a list in j
. Each element of the list will become a column in the result. So, we can use, once again, the familiar base function c()
to concatenate .N
to a list
which returns a list
.
Note: Once again, in data.table, all we need to do is return a list in
j
. Each element of the list will become a column in result. You can usec()
,as.list()
,lapply()
,list()
etc... base functions to accomplish this, without having to learn any new functions.You will need to learn just the special variables -
.N
and.SD
at least. The equivalent in dplyr aren()
and.
Joins
dplyr provides separate functions for each type of join where as data.table allows joins using the same syntax DT[i, j, by]
(and with reason). It also provides an equivalent merge.data.table()
function as an alternative.
setkey(DT1, x, y)
# 1. normal join
DT1[DT2] ## data.table syntax
left_join(DT2, DT1) ## dplyr syntax
# 2. select columns while join
DT1[DT2, .(z, i.mul)]
left_join(select(DT2, x, y, mul), select(DT1, x, y, z))
# 3. aggregate while join
DT1[DT2, .(sum(z) * i.mul), by = .EACHI]
DF1 %>% group_by(x, y) %>% summarise(z = sum(z)) %>%
inner_join(DF2) %>% mutate(z = z*mul) %>% select(-mul)
# 4. update while join
DT1[DT2, z := cumsum(z) * i.mul, by = .EACHI]
??
# 5. rolling join
DT1[DT2, roll = -Inf]
??
# 6. other arguments to control output
DT1[DT2, mult = "first"]
??
Some might find a separate function for each joins much nicer (left, right, inner, anti, semi etc), whereas as others might like data.table's DT[i, j, by]
, or merge()
which is similar to base R.
However dplyr joins do just that. Nothing more. Nothing less.
data.tables can select columns while joining (2), and in dplyr you will need to select()
first on both data.frames before joining, as shown above. Otherwise you would materialise the join with unnecessary columns only to remove them later, and that is inefficient.
data.tables can aggregate while joining (3) and also update while joining (4), using by = .EACHI
feature. Why materialise the entire join result to add/update just a few columns?
data.table is capable of rolling joins (5) - roll forward, LOCF, roll backward, NOCB, nearest.
data.table also has mult =
argument which selects first, last or all matches (6).
data.table has allow.cartesian = TRUE
argument to protect from accidental invalid joins.
Once again, the syntax is consistent with
DT[i, j, by]
with additional arguments allowing for controlling the output further.
do()
...
dplyr's summarise is specially designed for functions that return a single value. If your function returns multiple/unequal values, you will have to resort to do()
. You have to know beforehand about all your functions return value.
DT[, list(x[1], y[1]), by = z] ## data.table syntax
DF %>% group_by(z) %>% summarise(x[1], y[1]) ## dplyr syntax
DT[, list(x[1:2], y[1]), by = z]
DF %>% group_by(z) %>% do(data.frame(.$x[1:2], .$y[1]))
DT[, quantile(x, 0.25), by = z]
DF %>% group_by(z) %>% summarise(quantile(x, 0.25))
DT[, quantile(x, c(0.25, 0.75)), by = z]
DF %>% group_by(z) %>% do(data.frame(quantile(.$x, c(0.25, 0.75))))
DT[, as.list(summary(x)), by = z]
DF %>% group_by(z) %>% do(data.frame(as.list(summary(.$x))))
.SD
's equivalent is .
In data.table, you can throw pretty much anything in j
- the only thing to remember is for it to return a list so that each element of the list gets converted to a column.
In dplyr, you cannot do that. You have to resort to do()
depending on how sure you are as to whether your function would always return a single value. And it is quite slow.
Once again, data.table's syntax is consistent with
DT[i, j, by]
. We can just keep throwing expressions inj
without having to worry about these things.
Have a look at this SO question and this one. I wonder if it would be possible to express the answer as straightforward using dplyr's syntax...
To summarise, I have particularly highlighted several instances where dplyr's syntax is either inefficient, limited or fails to make operations straightforward. This is particularly because data.table gets quite a bit of backlash about "harder to read/learn" syntax (like the one pasted/linked above). Most posts that cover dplyr talk about most straightforward operations. And that is great. But it is important to realise its syntax and feature limitations as well, and I am yet to see a post on it.
data.table has its quirks as well (some of which I have pointed out that we are attempting to fix). We are also attempting to improve data.table's joins as I have highlighted here.
But one should also consider the number of features that dplyr lacks in comparison to data.table.
I have pointed out most of the features here and also in this post. In addition:
fread - fast file reader has been available for a long time now.
fwrite - a parallelised fast file writer is now available. See this post for a detailed explanation on the implementation and #1664 for keeping track of further developments.
Automatic indexing - another handy feature to optimise base R syntax as is, internally.
Ad-hoc grouping: dplyr
automatically sorts the results by grouping variables during summarise()
, which may not be always desirable.
Numerous advantages in data.table joins (for speed / memory efficiency and syntax) mentioned above.
Non-equi joins: Allows joins using other operators <=, <, >, >=
along with all other advantages of data.table joins.
Overlapping range joins was implemented in data.table recently. Check this post for an overview with benchmarks.
setorder()
function in data.table that allows really fast reordering of data.tables by reference.
dplyr provides interface to databases using the same syntax, which data.table does not at the moment.
data.table
provides faster equivalents of set operations (written by Jan Gorecki) - fsetdiff
, fintersect
, funion
and fsetequal
with additional all
argument (as in SQL).
data.table loads cleanly with no masking warnings and has a mechanism described here for [.data.frame
compatibility when passed to any R package. dplyr changes base functions filter
, lag
and [
which can cause problems; e.g. here and here.
Finally:
On databases - there is no reason why data.table cannot provide a similar interface, but this is not a priority now. It might get bumped up if users would very much like that feature - not sure.
On parallelism - everything is difficult, until someone goes ahead and does it. Of course it will take effort (being thread safe), likely using OpenMP.
select * from test
where date between '03/19/2014' and '03/19/2014 23:59:59'
This is a really bad answer, for two reasons:
1. What happens with times like 23:59:59.700? There are times larger than 23:59:59 but still before the next day.
2. The behaviour depends on the datatype. The query behaves differently for datetime/date/datetime2 types.
Testing with 23:59:59.999 makes it even worse, because depending on the datatype you get different roundings.
select convert (varchar(40),convert(date , '2014-03-19 23:59:59.999'))
select convert (varchar(40),convert(datetime , '2014-03-19 23:59:59.999'))
select convert (varchar(40),convert(datetime2 , '2014-03-19 23:59:59.999'))
-- For date the value is 'chopped'.
-- For datetime the value is rounded up to the next date (nearest value).
-- For datetime2 the value is precise.
The robust pattern is a half-open range, which avoids the end-of-day problem entirely: date >= '2014-03-19' AND date < '2014-03-20'.
Try this
$(document).ready(function () {
$("#ckbCheckAll").click(function () {
$("#checkBoxes input").prop('checked', $(this).prop('checked'));
});
});
That should do it :)
I got the same problem with my customized theme that used Holo.Light as its parent. In grayed-out text, Android Studio indicated that some attributes were missing. When I added these missing attributes as follows, the rendering problems went away:
<item name="android:textEditSuggestionItemLayout"></item>
<item name="android:textEditSuggestionContainerLayout"></item>
<item name="android:textEditSuggestionHighlightStyle"></item>
Even though they introduced errors in my style's theme, they caused no problems in rendering the activity designs or building my app.
You could do this with a list comprehension
l = [x for i in range(10)]
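For instance, a minimal sketch (x is assumed to hold whatever value you want repeated):
x = 'a'  # the value to repeat, assumed for illustration
l = [x for i in range(10)]
print(l)  # ['a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a', 'a']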
Doesn't get simpler than this:
<a href="https://twitter.com/intent/tweet?text=optional%20promo%20text%20http://example.com/foo.htm?bar=123&baz=456" target="_blank">Tweet</a>
To Hex:
string hex = intValue.ToString("X");
To int:
int intValue = int.Parse(hex, System.Globalization.NumberStyles.HexNumber);
Simply select the class, right click and choose rename (probably F2 will also do). You can also select the class name in the source file, right click, choose Source, Refactor and rename. In both cases, both the class and the filename will be changed.
While a few of these answers will get you in the ballpark, you cannot do what you're trying to do with arbitrary dates on SQL Server 2005 and earlier because of daylight saving time. Using the difference between the current local time and the current UTC time gives you the offset as it exists today; I have not found a way to determine what the offset would have been for the date in question.
That said, I know that SQL Server 2008 provides some new date functions that may address this issue, but folks using an earlier version need to be aware of the limitations.
Our approach is to persist UTC and perform the conversion on the client side where we have more control over the conversion's accuracy.
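As an illustration of that client-side approach, here is a sketch in Python rather than the SQL Server stack this answer is about (the timezone name is an assumption for the example). Converting the persisted UTC value on the client correctly applies the DST rules that were in effect on the historical date:
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

stored_utc = datetime(2014, 3, 19, 18, 30, tzinfo=timezone.utc)  # value persisted as UTC
local = stored_utc.astimezone(ZoneInfo("America/New_York"))      # DST-aware for that date
print(local)  # 2014-03-19 14:30:00-04:00 (EDT was in effect then)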
Let me show you a little trick.
As Arkanciscan said, you can use CSS3 transitions. But his solution looks different from the original tag.
What you really need to do is this:
@keyframes blink {
  50% {
    opacity: 0.0;
  }
}
@-webkit-keyframes blink {
  50% {
    opacity: 0.0;
  }
}
.blink {
  animation: blink 1s step-start 0s infinite;
  -webkit-animation: blink 1s step-start 0s infinite;
}

<span class="blink">Blink</span>
I finally understood what was going on.
When creating an integration test on a statement saving an object, it is recommended to flush the entity manager so as to avoid any false negative, that is, a test that runs fine but whose operation would fail when run in production. Indeed, the test may run fine simply because the first level cache is not flushed and no writing hits the database. To avoid this false negative, use an explicit flush in the test body. Note that the production code should never need an explicit flush, as it is the role of the ORM to decide when to flush.
When creating an integration test on an update statement, it may be necessary to clear the entity manager so as to empty the first level cache and force objects to be reloaded. Indeed, an update statement completely bypasses the first level cache and writes directly to the database. The first level cache is then out of sync and reflects the old value of the updated object. To avoid this stale state of the object, use an explicit clear in the test body. Note that the production code should never need an explicit clear, as it is the role of the ORM to decide when to clear.
My test now works just fine.
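For what it's worth, here is the same flush/clear idea sketched with SQLAlchemy (purely illustrative - my problem was in JPA/Hibernate, but SQLAlchemy's identity map plays the role of the first level cache):
from sqlalchemy import Integer, String, create_engine, update
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

class Base(DeclarativeBase):
    pass

class Item(Base):
    __tablename__ = "item"
    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    name: Mapped[str] = mapped_column(String(50))

engine = create_engine("sqlite://")  # in-memory database for the test
Base.metadata.create_all(engine)

with Session(engine) as session:
    item = Item(name="a")
    session.add(item)
    session.flush()  # force the INSERT to actually hit the database

    # A bulk UPDATE bypasses the identity map (the "first level cache")...
    session.execute(update(Item).where(Item.id == item.id).values(name="b"))
    session.expire_all()  # ...so clear the cached state before re-reading

    assert session.get(Item, item.id).name == "b"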
top5 = array[:5]
Slicing uses the general syntax array[start:stop:step]; any part can be omitted, as in array[start:], array[:stop] or array[::step].
import itertools
top5 = itertools.islice(my_list, 5) # grab the first five elements
You can't slice a generator directly in Python. itertools.islice() will wrap an object in a new slicing generator using the syntax itertools.islice(generator, start, stop, step).
Remember, slicing a generator will exhaust it partially. If you want to keep the entire generator intact, perhaps turn it into a tuple or list first, like: result = tuple(generator)
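A quick sketch of islice on a real generator:
import itertools

gen = (n * n for n in range(100))      # a generator of squares
top5 = list(itertools.islice(gen, 5))  # materialise the first five
print(top5)                            # [0, 1, 4, 9, 16]
print(next(gen))                       # 25 - the generator is partially exhausted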
Description and examples can be found in IEEE Std 1800-2017 § 11.5.1 "Vector bit-select and part-select addressing". The first IEEE appearance is IEEE 1364-2001 (Verilog) § 4.2.1 "Vector bit-select and part-select addressing". Here is a direct example from the LRM:
logic [31: 0] a_vect;
logic [0 :31] b_vect;
logic [63: 0] dword;
integer sel;
a_vect[ 0 +: 8]   // == a_vect[ 7 : 0]
a_vect[15 -: 8]   // == a_vect[15 : 8]
b_vect[ 0 +: 8]   // == b_vect[0 : 7]
b_vect[15 -: 8]   // == b_vect[8 :15]
dword[8*sel +: 8] // variable part-select with fixed width
If sel is 0 then dword[8*(0) +: 8] == dword[7:0]
If sel is 7 then dword[8*(7) +: 8] == dword[63:56]
The value to the left is always the starting index. The number to the right is the width, and it must be a positive constant. The + and - indicate whether to select bits at a higher or lower index than the starting index.
Assuming address is in little endian ([msb:lsb]) format, then if(address[2*pointer +: 2]) is the equivalent of if({address[2*pointer+1], address[2*pointer]})
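If it helps to think of the variable part-select in software terms, dword[8*sel +: 8] selects the sel-th byte, which is the same bits this Python sketch extracts (an illustration only, not full Verilog semantics):
dword = 0x0123456789ABCDEF  # a 64-bit value, assumed for illustration
sel = 7
byte = (dword >> (8 * sel)) & 0xFF  # same bits as dword[8*sel +: 8]
print(hex(byte))  # 0x1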
I had a similar problem with ionic where I was trying to load the native camera as soon as I select the camera tab. I resolved the issue by setting the controller to the ion-view component for the camera tab (in tabs.html) and then calling the $scope method that loads my camera (addImage).
In www/templates/tabs.html
<ion-tab title="Camera" icon-off="ion-camera" icon-on="ion-camera" href="#/tab/chats" ng-controller="AddMediaCtrl" ng-click="addImage()">
<ion-nav-view name="tab-chats"></ion-nav-view>
</ion-tab>
The addImage method, defined in AddMediaCtrl loads the native camera every time the user clicks the "Camera" tab. I did not have to change anything in the angular cache for this to work. I hope this helps.
If you use the relative version on http://www.foo.com/abc, your browser will look for the image at http://www.foo.com/abc/kitten.png and would get a 404 Not Found.
In the accepted answer's update you don't see the example for the to_date function, so another solution using it would be:
from pyspark.sql import functions as F
df = df.withColumn(
'new_date',
F.to_date(
F.unix_timestamp('STRINGCOLUMN', 'MM-dd-yyyy').cast('timestamp')))
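A small end-to-end sketch of the same call (this assumes a local SparkSession; the column name STRINGCOLUMN and the MM-dd-yyyy format are taken from the snippet above):
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.master("local[1]").getOrCreate()
df = spark.createDataFrame([("08-15-2012",), ("12-15-2012",)], ["STRINGCOLUMN"])
df = df.withColumn(
    'new_date',
    F.to_date(
        F.unix_timestamp('STRINGCOLUMN', 'MM-dd-yyyy').cast('timestamp')))
df.show()  # new_date holds proper dates, e.g. 2012-08-15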
Based on this code (which you provided in response to Alex's answer):
Editable newTxt=(Editable)userName1.getText();
String newString = newTxt.toString();
It looks like you're trying to get the text out of a TextView or EditText. If that's the case then this should work:
String newString = userName1.getText().toString();
Kyle's solution worked perfectly fine for me, so I did my research to avoid any JS and CSS and stick with plain HTML.
Adding the selected attribute to the option we want to appear as a header forces it to show in the first place, as a placeholder.
Something like:
<option selected disabled>Choose here</option>
The complete markup should be along these lines:
<select>
<option selected disabled>Choose here</option>
<option value="1">One</option>
<option value="2">Two</option>
<option value="3">Three</option>
<option value="4">Four</option>
<option value="5">Five</option>
</select>
You can take a look at this fiddle, and here's the result:
If you do not want the placeholder text to appear listed among the options once a user clicks on the select box, just add the hidden attribute like so:
<select>
<option selected disabled hidden>Choose here</option>
<option value="1">One</option>
<option value="2">Two</option>
<option value="3">Three</option>
<option value="4">Four</option>
<option value="5">Five</option>
</select>
Check the fiddle here and the screenshot below.
Here is the solution:
<select>
<option style="display:none;" selected>Select language</option>
<option>Option 1</option>
<option>Option 2</option>
</select>
This can be easily accomplished using PHP's fmod function. The code below is specific to 10, but you can change it to any number.
$num = 97;
$r = fmod($num, 10);                       // remainder after dividing by 10
$r = ($r == 0) ? $num : $num + (10 - $r);  // already a multiple of 10? keep it as is
return $r;
OUTPUT: 100
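For comparison, the same round-up-to-the-nearest-multiple logic as a minimal Python sketch (an illustration only; the question itself is about PHP):
import math

def round_up(num, base=10):
    # round num up to the nearest multiple of base
    return math.ceil(num / base) * base

print(round_up(97))   # 100
print(round_up(100))  # 100 - multiples of the base stay put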
I can't add a comment above as I do not have enough reputation, but the above answer was nearly perfect for me, except I had to add
type: "POST"
to the .ajax call. I was scratching my head for a few minutes trying to figure out what I had done wrong; that's all it needed, and it works a treat. So this is the whole snippet:
Full credit to the answer above me, this is just a small tweak to that. This is just in case anyone else gets stuck and can't see the obvious.
$.ajax({
url: 'Your url here',
data: formData,
type: "POST", //ADDED THIS LINE
// THIS MUST BE DONE FOR FILE UPLOADING
contentType: false,
processData: false,
// ... Other options like success and etc
})
I had a case where I was entering text into a field, after which the text would be removed automatically. It turned out this was due to some site functionality where you had to press the enter key after entering the text into the field. So, after sending your barcode text with the sendKeys method, send 'enter' directly after it. Note that you will have to import the Selenium Keys class. See my code below.
import org.openqa.selenium.Keys;
String barcode="0000000047166";
WebElement element_enter = driver.findElement(By.xpath("//*[@id='div-barcode']"));
element_enter.findElement(By.xpath("your xpath")).sendKeys(barcode);
element_enter.sendKeys(Keys.RETURN); // this will result in the return key being pressed upon the text field
I hope it helps.
The Windows FTP command-line client (ftp.exe) does not support the passive mode, on any version of Windows. It makes it pretty useless nowadays due to ubiquitous firewalls and NATs.
Using quote pasv won't help. It switches only the server to the passive mode, but not the client.
Use any third-party Windows FTP command-line client instead. Most others support the passive mode.
For example, WinSCP defaults to the passive mode, and there's a guide available for converting a Windows FTP script to a WinSCP script. If you are starting from scratch, see the guide to automating file transfers to FTP using WinSCP. Also, the WinSCP GUI can generate a script template for you.
(I'm the author of WinSCP)
In Windows Server 2012, even after installing ASP.NET you might run into this issue.
Check for the "HTTP Activation" feature. This feature is present under Web Services as well.
Make sure you add the above, and everything should be awesome for you!
A further modification of JC_VA's answer: take what he has, and replace the MyModelConverter with...
public class MyModelConverter : JsonConverter
{
//objectType is the type as specified for List<myModel> (i.e. myModel)
public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
{
var token = JToken.Load(reader); //json from myModelList > model
var list = Activator.CreateInstance(objectType) as System.Collections.IList; // new list to return
var itemType = objectType.GenericTypeArguments[0]; // type of the list (myModel)
if (token.Type == JTokenType.Object) // a single object
{
var newObject = Activator.CreateInstance(itemType);
serializer.Populate(token.CreateReader(), newObject);
list.Add(newObject);
}
else //Array
{
foreach (var child in token.Children())
{
var newObject = Activator.CreateInstance(itemType);
serializer.Populate(child.CreateReader(), newObject);
list.Add(newObject);
}
}
return list;
}
public override bool CanConvert(Type objectType)
{
return objectType.IsGenericType && (objectType.GetGenericTypeDefinition() == typeof(List<>));
}
public override bool CanWrite => false;
public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer) => throw new NotImplementedException();
}
This should work for JSON that is either:
myModelList{
model: [{ ... object ... }]
}
or
myModelList{
model: { ... object ... }
}
they will both end up being parsed as if they were
myModelList{
model: [{ ... object ... }]
}
Simply use the find tool:
find absolute_path
displays full paths on my Linux machine, while
find relative_path
will not.
call.request().toString();
=iferror(1/ (1/ H15+G16-F16 ), "")
This way avoids repeating the central calculation (which can often be much longer or more processor-hungry than the one you have here).
enjoy
It is wrong to say that one is a better choice than the other (or both wouldn't have been given their own purpose in the spec). It may even be worth noting that StackOverflow makes extensive use of px units. It is not the poor choice Spoike was told it was.
Definition of units
px is an absolute unit of measurement (like in, pt, or cm) that also happens to be 1/96 of an in unit (more on why later). Because it is an absolute measurement, it may be used any time you want to define something to be a particular size, rather than being proportional to something else like the size of the browser window or the font size.
Like all the other absolute units, px units don't scale according to the width of the browser window. Thus, if your entire page design uses absolute units such as px rather than %, it won't adapt to the width of the browser. This is not inherently good or bad, just a choice that the designer needs to make between adhering to an exact size and being inflexible versus stretching but in the process not adhering to an exact size. It would be typical for a site to have a mix of fixed-size and flexible-sized objects.
Fixed-size elements often need to be incorporated into the page - such as advertising banners, logos or icons. This means you almost always need at least some px-based measurements in a design. Images, for example, will (by default) be scaled such that each image pixel is 1px in size, so if you are designing around an image you'll need px units. px is also very useful for precise font sizing, and for border widths, where due to rounding it makes the most sense to use px units for the majority of screens.
All absolute measurements are rigidly related to each other; that is, 1in is always 96px, just as 1in is always 72pt. (Note that 1in is almost never actually a physical inch when talking about screen-based media). All absolute measurements assume a nominal screen resolution of 96ppi and a nominal viewing distance of a desktop monitor, and on such a screen one px will be equal to one physical pixel on the screen and one in will be equal to 96 physical pixels. On screens that differ significantly in either pixel density or viewing distance, or if the user has zoomed the page using the browser's zoom function, px will no longer necessarily relate to physical pixels.
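Those fixed ratios make unit conversion pure arithmetic; a tiny Python sketch (illustrative only):
PX_PER_IN = 96  # 1in is always 96px in CSS
PT_PER_IN = 72  # 1in is always 72pt

def pt_to_px(pt):
    # convert CSS points to CSS pixels via the spec-defined ratios
    return pt * PX_PER_IN / PT_PER_IN

print(pt_to_px(12))  # 16.0 - the familiar 12pt == 16px equivalence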
em is not an absolute unit - it is a unit that is relative to the currently chosen font size. Unless you have overridden font style by setting your font size with an absolute unit (such as px or pt), this will be affected by the choice of fonts in the user's browser or OS if they have made one, so it does not make sense to use em as a general unit of length except where you specifically want it to scale as the font size scales.
Use em when you specifically want the size of something to depend on the current font size.
% is also a relative unit, in this case, relative to either the height or width of a parent element. They are a good alternative to px units for things like the total width of a design if your design does not rely on specific pixel sizes to set its size.
Using % units in your design allows your design to adapt to the width of the screen/device, whereas using an absolute unit such as px does not.
I tried the below and it worked well.
Install Dimensions Data Model and OLE DB Access, and follow the below steps in Excel:
Data->Get External Data ->From Other sources -> From Data Connection Wizard -> Other/Advanced-> SPSS MR DM-2 OLE DB Provider-> Metadata type as SPSS File(SAV)-> SPSS data file in Metadata Location->Finish
<!--
//THIS PROGRAM WILL UPLOAD AN IMAGE AND RETRIEVE IT FROM THE DATABASE, USING A BLOB
CREATE TABLE `images` (
`id` int(100) NOT NULL AUTO_INCREMENT,
`name` varchar(100) NOT NULL,
`image` longblob NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB ;
-->
<!-- this form is used to store images-->
<form action="index.php" method="post" enctype="multipart/form-data">
Enter the Image Name:<input type="text" name="image_name" id="" /><br />
<input name="image" id="image" accept="image/JPEG" type="file"><br /><br />
<input type="submit" value="submit" name="submit" />
</form>
<br /><br />
<!-- this form is used to display all the images-->
<form action="index.php" method="post" enctype="multipart/form-data">
Retrieve all the images:
<input type="submit" value="submit" name="retrive" />
</form>
<?php
//THIS IS INDEX.PHP PAGE
//connect to the database; db name is images
//(note: the mysql_* API used here is obsolete; prefer mysqli or PDO in new code)
mysql_connect("", "", "") OR DIE (mysql_error());
mysql_select_db ("") OR DIE ("Unable to select db".mysql_error());
//to retrieve, redirect to another page
if(isset($_POST['retrive']))
{
header("location:search.php");
}
//to upload
if(isset($_POST['submit']))
{
if(isset($_FILES['image'])) {
$name=$_POST['image_name'];
$fp=addslashes(file_get_contents($_FILES['image']['tmp_name'])); //will store the image contents in $fp
}
// our sql query
$sql = "INSERT INTO images VALUES('null', '{$name}','{$fp}');";
mysql_query($sql) or die("Error in Query insert: " . mysql_error());
}
?>
<?php
//SEARCH.PHP PAGE
//connect to database.db name = images
mysql_connect("localhost", "root", "") OR DIE (mysql_error());
mysql_select_db ("image") OR DIE ("Unable to select db".mysql_error());
//display all the images present in the database
$msg="";
$sql="select * from images";
$res=mysql_query($sql);
if($res)
{
while($row=mysql_fetch_array($res))
{
$id=$row['id'];
$name=$row['name'];
$image=$row['image'];
$msg.= '<a href="search.php?id='.$id.'"><img src="data:image/jpeg;base64,'.base64_encode($row['image']). ' " /> </a>';
}
}
else
$msg.="Query failed";
?>
<div>
<?php
echo $msg;
?>
You are using g++ version 4.6; you must pass the -std=c++0x flag to compile:
g++ -std=c++0x *.cpp -o output
I found a very simple solution to "pip - Fatal error in launcher":
1) You must not have multiple environment variables for the Python path.
A) Go to Environment Variables and delete Python27 from the Path if you have Python 3.6.5 installed. Pip is confused by multiple paths!
I know it's quite an old question, but I didn't see any answer clarifying why and when it is possible to break with return.
I would like to explain it with 2 simple examples:
1. Example: in this case, we have a simple iteration and we want to break with return true if we can find the three.
function canFindThree() {
for(var i = 0; i < 5; i++) {
if(i === 3) {
return true;
}
}
}
If we call this function, it will simply return true.
2. Example: in this case, we want to iterate with jQuery's each function, which takes an anonymous function as a parameter.
function canFindThree() {
var result = false;
$.each([1, 2, 3, 4, 5], function(key, value) {
if(value === 3) {
result = true;
return false; //This will only exit the anonymous function and stop the iteration immediately.
}
});
return result; //This will exit canFindThree, returning true if three was found.
}
If you want to show the rating bar in small size, then just copy and paste this code in your project.
<RatingBar
android:id="@+id/MyRating"
style="?android:attr/ratingBarStyleSmall"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_below="@id/getRating"
android:isIndicator="true"
android:numStars="5"
android:stepSize="0.1" />
I've been confused by the wide range of different naming schemes as well, and was just about to ask and answer my own question on this when I found this question here. I think my findings fit well enough and complement what's already here. My focus is looking for documentation on the various terms, and adding some related terms that might crop up in other places.
Consider the following example:
package a.b;
class C {
static class D extends C {
}
D d;
D[] ds;
}
The simple name of D is D - that's just the part you wrote when declaring the class. Anonymous classes have no simple name. Class.getSimpleName() returns this name or the empty string. It is possible for the simple name to contain a $ if you write it like this, since $ is a valid part of an identifier as per JLS section 3.8 (even if it is somewhat discouraged).
According to the JLS section 6.7, both a.b.C.D and a.b.C.D.D.D would be fully qualified names, but only a.b.C.D would be the canonical name of D. So every canonical name is a fully qualified name, but the converse is not always true. Class.getCanonicalName() will return the canonical name or null.
Class.getName() is documented to return the binary name, as specified in JLS section 13.1. In this case it returns a.b.C$D for D and [La.b.C$D; for D[].
This answer demonstrates that it is possible for two classes loaded by the same class loader to have the same canonical name but distinct binary names. Neither name is sufficient to reliably deduce the other: if you have the canonical name, you don't know which parts of the name are packages and which are containing classes. If you have the binary name, you don't know which $ were introduced as separators and which were part of some simple name. (The class file stores the binary name of the class itself and its enclosing class, which allows the runtime to make this distinction.)
Anonymous classes and local classes have no fully qualified names but still have a binary name. The same holds for classes nested inside such classes. Every class has a binary name.
Running javap -v -private on a/b/C.class shows that the bytecode refers to the type of d as La/b/C$D; and that of the array ds as [La/b/C$D;. These are called descriptors, and they are specified in JVMS section 4.3.
The class name a/b/C$D used in both of these descriptors is what you get by replacing . with / in the binary name. The JVM spec apparently calls this the internal form of the binary name. JVMS section 4.2.1 describes it, and states that the difference from the binary name was for historical reasons.
The file name of a class in one of the typical filename-based class loaders is what you get if you interpret the / in the internal form of the binary name as a directory separator and append the file name extension .class to it. It's resolved relative to the class path used by the class loader in question.
You can only use await in an async method, and Main cannot be async (prior to C# 7.1, which added support for an async Main).
You'll have to use your own async-compatible context, call Wait on the returned Task in the Main method, or just ignore the returned Task and block on the call to Read. Note that Wait will wrap any exceptions in an AggregateException.
If you want a good intro, see my async/await intro post.